BaseN

14.02.2011 - The Immaterial Phone

On Friday Nokia announced that it will introduce a third operating system - Windows Phone 7 - to some of its high-end devices. Compared to the serious competition, which runs either Google Android or Apple iOS, I find this move puzzling. How does one keep up with the evolution of new features while supporting three different operating systems?

I've been a Nokia phone user since my Mobira Talkman in the late 1980s, mainly because Nokia has provided me with increasing capabilities at a steady pace. With today's N900 I can use every IT system of our company over an encrypted VPN. In addition, the underlying Linux allows full-scale administrative access to our BaseN Platform - a capability not often used, but it is there.

However, I've always found changing to a new phone troublesome and time-consuming. Calendar, address book, applications and other settings need to be synchronized, and this rarely goes smoothly with the provided migration applications.

By introducing an immaterial phone, Nokia, with its strong roots in telecommunications, could become a game changer once again. Imagine if the software of your phone existed primarily in the computing cloud, decoupled from the hardware. You could then invoke it with a web browser, a tablet computer or a 'surrogate' phone whenever necessary, your settings always in sync. Upon hardware failure or loss, you would just tell the cloud service the serial number of your new hardware and in a few moments your environment would return. Or you could have multiple synchronized phones.

This would also allow for novel ways to interact with your operator - wherever there is IP coverage (a WLAN hotspot, for instance), all GSM/CDMA traffic could be transported over it, altering the international roaming cartel quite radically. The protocols and technologies for this are largely ready, as we can see from the growing deployments of femtocell base stations connected to consumer broadband.

The existing Ovi portfolio, like similar services at Google and Apple, already offers data storage and backup, but its success has been limited because these have been marketed only as add-on features. Changing the game would require a wholehearted effort to introduce a truly new phone concept. Someone will do it sooner or later, and I'm hoping it'll be Nokia.

//Pasi

06.08.2013 - Technological Wasteland

Intelligence gathering and services have existed since the earliest civilizations. Until now, however, those have been relatively expensive, requiring a lot of manpower and structures.

Today's Internet, in turn, provides a very affordable, virtually free data warehouse for intelligence operatives. The recently revealed PRISM and XKeyScore probably carry a smaller combined price tag than a single foreign-country unit did in the 1980s. Echelon, which caused some stir in the 1990s, was still a formidable investment.

Although I feel strongly about small government, citizen privacy and free speech, these are, in my opinion, not the biggest issues when it comes to this massive surveillance of foreign countries, their companies and their people - I'm concerned about innovation on a global scale.

If all (national) research and development data is immediately available to a foreign intelligence agency and its contractor companies, it is just a matter of time before groundbreaking innovations and discoveries start to happen in the country that has its own R&D plus everything from, say, a friendly European Union. And I'm not talking about state-industrial espionage, which is illegal under international agreements. I'm talking about young people becoming subject matter experts in those vast intelligence organizations and then taking the next steps in their careers as engineers and inventors at existing and future companies.

My postulate is that a country, or a continent, which becomes a technological wasteland steadily degrades into a problem and conflict area. When millions of young people say that their dream is to work for the government because nothing else pays off (as happened in Egypt), conflict is at the door.

In the US, the National Security Agency (NSA) is also responsible for helping US companies keep their confidential data safe. It has introduced, for example, the Security-Enhanced Linux modules, the SHA-2 hash algorithms and a lot of training for technology companies, both private and state-owned.

Here in the EU, we urgently need to educate our research institutes and companies to excel in strong data encryption and related security technologies. Keeping data private should be a citizen skill, fostered by the government - if we are to maintain our innovation capabilities.

//Pasi - PGP Public Key 551D0D20

11.06.2013 - Consumption.. Now

People in the intelligence branches tend to say that the value of reconnaissance data decreases exponentially with its age. Although historical analysis is important, having real-time data available is critical for better decision making - the human brain's short-term memory simply drives our primary cognitive functions.

We experience the same inflation when our utilities present us with 24-to-48-hour (or sometimes up to two-month) delays in energy and water consumption data. Such data is no longer actionable and usually just makes us feel guilty - for a while, as short-term memory has an efficient garbage collection system.

Last week we discussed our services with one of the largest local utilities, and how they relate to and integrate with the existing Automatic Meter Reading (AMR) and Meter Data Management (MDM) systems. When we inquired about their near-future requirements and wishes, the initial answer was somewhat startling:

"We'd like to know how much electricity our customers.. are using.. Now."

Well, that's something we deliver by default. Current AMR/MDM structures are too often bound to legacy billing cycles of 1-2 months. Smart meters can and will do much better if the infrastructure is designed to be real time from the beginning.
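
A minimal sketch of the difference, with an invented push_reading stand-in and simulated readings - not an actual AMR/MDM or BaseN interface: the meter pushes each reading as it is taken, instead of holding it locally for a billing cycle.

```python
# Hedged sketch: simulate a smart meter that publishes readings as they are
# taken. read_meter_kw and push_reading are invented placeholders.
import random
import time
from datetime import datetime, timezone


def read_meter_kw() -> float:
    """Pretend to sample the instantaneous power draw from a smart meter."""
    return round(random.uniform(0.2, 3.5), 3)


def push_reading(meter_id: str, kw: float) -> None:
    """Stand-in for publishing to a real-time collection endpoint (MQTT, HTTP, ...)."""
    ts = datetime.now(timezone.utc).isoformat()
    print(f"{ts} meter={meter_id} power_kw={kw}")


if __name__ == "__main__":
    # Push every couple of seconds instead of storing locally for a 1-2 month cycle.
    for _ in range(5):
        push_reading("meter-001", read_meter_kw())
        time.sleep(2)
```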

//Pasi

11.07.2013 - Root Cause Determinism

Most modern software applications, internal or external to organizations, tend to become highly complex over time when it comes to physical and logical servers, databases, front ends and other components. To troubleshoot these, many companies offer shrink-wrap products that promise to find the 'Root Cause' of any current or even new performance and reliability issue.

These products work well for, eh, shrink-wrap applications which also have deterministic, shrink-wrap issues pre-built into the monitoring product's problem database.

Our experience is that even a basic CRM application is usually customized, or deployed in a slightly different networking environment, so that no canned product can find the real performance bottlenecks.

We've always taken a different approach. In cases where our Platform is primarily used for performance tracking, we enable as many data feeds as possible from all software and hardware components in and near the target system. This easily generates a data flow of a few hundred kilobits per second, which is then templated and visualized in real time. All visualization components are modular, so the performance view can be adapted very quickly to match the application structure.
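
A minimal sketch of the templating idea, under assumed names (the feed identifiers, window size and reductions are invented, not how the Platform actually works): many raw feeds come in, reusable reductions act as templates, and a view is just a recomposable list of feed/template pairs.

```python
# Hedged sketch: modular feeds -> templates -> view, with invented names.
from collections import defaultdict, deque
from statistics import mean

WINDOW = 60  # keep the last 60 samples per feed
feeds = defaultdict(lambda: deque(maxlen=WINDOW))

# Each "template" is a reusable reduction over a raw feed.
templates = {
    "avg": lambda samples: mean(samples),
    "max": lambda samples: max(samples),
}

# A view composes feeds and templates; adapting it is a configuration change.
view = [
    ("frontend.request_ms", "avg"),
    ("db.query_ms", "max"),
    ("host1.cpu_util", "avg"),
]


def ingest(feed: str, value: float) -> None:
    feeds[feed].append(value)


def render(view) -> None:
    for feed, template in view:
        if feeds[feed]:
            print(f"{feed:22s} {template}={templates[template](feeds[feed]):.1f}")


if __name__ == "__main__":
    for i in range(30):
        ingest("frontend.request_ms", 80 + i)
        ingest("db.query_ms", 5 + (i % 7))
        ingest("host1.cpu_util", 35 + (i % 10))
    render(view)
```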

According to Donald Rumsfeld (who has clearly read his Clausewitz), there are known knowns, known unknowns and unknown unknowns. With application features and complexity increasing, it becomes ever more critical to measure and infer the latter two at scale.

//Pasi

12.03.2013 - Pseudoservices

My utility encountered a severe winter storm a few months ago, causing power outages for thousands of homes. Most mobile networks, though, were still up and running, as many base stations have battery and generator backups. Many people have smartphones, so they pointed their browsers at the utility's website and tried to find information about the outage and possible repair time estimates. The utility, like most others, has a graphical outage map, which had been paraded in the media just a couple of months earlier. So everything was supposed to be in order.

However, their outage visualization system collapsed after a few hundred simultaneous requests, which subsequently rendered their whole website unavailable.

My first thought was that this is just a sizing and configuration problem. However, now that I look at similar portals - utilities' energy consumption portals, for example - a different thought arises. I think these portals have been badly designed on purpose. Giving customers real-time information could generate inconvenient questions about the goals, preparedness and real technological level of the utility, so an 'unfortunate' website overload is a good firewall against criticism, at least for now.

This is one of the few places where regulation would help. At these energy prices, customers should get accurate, real-time information about their service level and consumption, through systems that can cope with the usage patterns people are now accustomed to. Unsurprisingly, one of the best-performing public services is the tax authority, which now sports fully electronic interfaces towards most companies and people. Scalability and user friendliness are there. Yes, when money is being collected.

It is now time to get other essential services to the same level. If the tax authority can process 5M records in real time without a glitch, utilities can provide visibility into their services in seconds, without 24-hour or even one-hour delays. An information blackout is not the solution.

//Pasi

15.01.2013 - Big Data Cloud Pioneers (10 + 11)

When NASA launched the Pioneer 10 and 11 deep space probes in 1972 and 1973 respectively, local computing and storage were extremely expensive compared to today's resources. That's why it was logical to make them both fully cloud-controlled, using NASA's Deep Space Network. Their software was updated countless times before 2003, when Pioneer 10 finally fell silent near the outskirts of our solar system due to the power constraints of its plutonium-based radioisotope thermoelectric generators. This was some 20 years after their planned lifetime.

The telemetry, radiation and numerous other sensor data amounted to a total of 40 gigabytes for both Pioneers - a formidable amount to store on the 800 cpi tapes of the late 1970s, and even on the 6250 cpi ones of the early 1990s. An 800 cpi full-size tape reel holds a maximum of 5 megabytes.

NASA had no obligation to store 'secondary' data like telemetry, but fortunately one of the original systems engineers, Larry Kellogg, converted the tapes to new formats every now and then. Thanks to him, scientists are still making new discoveries based on the raw Pioneer data. Having it in raw format is also of exceptional value, as ever more advanced algorithms can be applied to it.

Today's embedded but cloud-connected environments have a lot to learn from the Pioneers' engineering excellence and endurance planning. We just briefly forgot it when it seemed so easy to solve all storage and computing problems with local, power-hungry disks and CPUs.

Pioneer H, a never-launched sister of 10 and 11. Courtesy of NASA

//Pasi

16.10.2013 - Primordial Sea of Data

Hype has it that most organizations must start collecting and storing vast amounts of data from their existing and new systems, in order to stay competitive. In the past this was plainly called data warehousing, but now all major consultants and analysts rave about Big Data.

So what has changed? Apart from IT marketing, not much. Existing databases have been beefed up with enormous hardware and some fresh innovations (like Apache Hadoop) have arrived to try to overcome the limitations of legacy SQL behemoths.

But a database is just a database. Without algorithms and filters analyzing the data, it is just like the primordial sea before any life forms appeared.

In the coming years, nearly everything around us will be generating Big Data, so collecting it to any single location or database will be increasingly hard and eventually impossible. Databases, or the sea of data around us, will be highly distributed and mobile.

When reality soon eclipses the hype in Big Data, it will be those algorithms and filters, and their evolving combinations, that carry the largest value in any organization.

Those entities, which we call Spimes, need a scalable, fault tolerant and highly distributed home. It is also known as the BaseN Platform.

Rising data sea levels

//Pasi

16.12.2013 - Platform Socks

Having EU size 48 (US 13) feet causes some inconvenience as the selection of shoes and especially socks is quite small. Usually the largest sock size is 46-48, meaning that they barely fit and thus wear out relatively fast, probably because the fabric stretches to its maximum.

Socks have had a very similar user interface and sizing for centuries. There are a few providers of custom-made socks, but their prices are steep and they still start from the assumption that all feet have similar proportions.

Looking from our beloved spime perspective (and my own), socks should quickly be spimified so that each new pair fits slightly better than the last. They should also be reinforced at the spots where the previous pair experienced the most wear.

In order to sense wear and stretch, a sock might have a simple RFID tag connecting tiny conducting threads woven throughout its fabric. Each time the tag is read, it would transmit a 3D map of the sock, based on the matrix of broken and still-conducting threads, to a reader device - e.g. a mobile phone - capable of sending the data onwards.
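
A minimal sketch of what a sock spime could do with such a thread matrix; the grid size is invented and the 'tag read' is simulated with random data.

```python
# Hedged sketch: turn a (simulated) matrix of intact/broken threads into a
# list of spots to reinforce in the next pair.
import random

ROWS, COLS = 8, 6  # assumed thread grid woven into the sock


def read_tag() -> list:
    """Pretend RFID read: True = thread intact, False = broken (worn through)."""
    return [[random.random() > 0.15 for _ in range(COLS)] for _ in range(ROWS)]


def wear_map(grid) -> list:
    """Coordinates of broken threads, i.e. the spots to reinforce."""
    return [(r, c) for r, row in enumerate(grid) for c, ok in enumerate(row) if not ok]


if __name__ == "__main__":
    worn = wear_map(read_tag())
    print(f"{len(worn)} worn spots -> reinforce at:", worn)
```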

As most RFID tags are now mass printed, the additional cost of the sock tag would be hardly more than 15 eurocents, so this would be feasible for most sock manufacturers.

Now add the BaseN Platform to host those millions of sock spimes (3D maps, current and historical) and my desired sock service is ready. I would subscribe right away if I could get new, better-fitting socks mailed to me just before the old ones start breaking down.

//Pasi

16.04.2013 - Smart Slum

In most energy efficiency and Smart Grid projects and pilots, we see shiny new buildings constructed for wealthy inhabitants, who drive and charge their hybrid SUVs and golf carts using state-of-the-art solar panels on the roof of the building.

This is all nice and convenient, but it does not stand up to closer scrutiny when it comes to the total energy efficiency and carbon footprint of these measures. Small-scale solar panels, wind turbines and more complicated construction methods easily generate more carbon dioxide in their manufacturing than the building saves during its typical lifetime.

If we are really going to increase societal energy efficiency, technologies must be designed to scale to millions of people, starting from the lowest income classes. The smallest student and single-parent rental flats should be at the top of the list to reap the benefits of smart meters, demand response, time-of-use energy pricing and other services offered by quasi-monopoly utilities.

This is more than doable, given that state and municipal owners of these utilities take some action instead of just collecting nice dividends each year.

Smart and glossy is good but not enough (Image by Skanska)

//Pasi

19.11.2013 - Missing My Code

At 12 I built a rudimentary, 4-relay controller which I attached to my Commodore 64's user port. This allowed me to create irritating light shows with colored bulbs on the relays, in addition to controlling volume and channel selections on a half-dismantled stereo set. Triggering these at times when I was away from home was highly enjoyable - from my point of view, at least.

I was always fascinated by radio-controlled (RC) aeroplanes and helicopters, but due to their high prices I was only able to negotiate a couple of simple toy-grade RC cars. Controlling a car wirelessly with a dedicated controller was a fun thought, but it turned out to be boring within a few days.

When one of the RC cars' motors let out the holy smoke (I had installed an additional, way too powerful battery pack), I was left with one working car and a bunch of spare parts, including a spare radio control set.

Having seen Star Wars, I wanted a thing that could control itself (Yes, I really like(d) R2D2), so I attached the radio set to the relay controller of the C64. Four relays were just enough for forward/back and left/right commands. This initially enabled me to drive the car from the C64 keyboard and save and replay its routes. Cool, but not enough.

Installing the second radio transmitter to the car enabled me to add front and rear collision sensors, made from bent copper. The receiver was connected to the C64 joystick port, as it was easy and fast to read in software.

The end result was a car that could map a room and avoid obstacles by itself. I coded for weeks to make the thing as autonomous as possible within the constraints of 64 kilobytes of memory. Looking at the car, I felt it was 'thinking', as the 1 MHz processor took quite some time to iterate over the coordinates in memory.
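
A minimal sketch, in Python rather than the original (and lost) C64 code, of the kind of bump-and-map logic described; the room layout and simple wall-following behavior are invented.

```python
# Hedged sketch: drive, bump, mark the cell as blocked, turn, repeat - and
# end up with a rough map of the room.
ROOM = [
    "########",
    "#......#",
    "#..##..#",
    "#......#",
    "########",
]
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left


def explore(start=(1, 1), steps=60):
    r, c = start
    heading = 1  # start facing right
    known = {start: "."}
    for _ in range(steps):
        dr, dc = DIRS[heading]
        nr, nc = r + dr, c + dc
        if ROOM[nr][nc] == "#":      # front bumper hit: map the wall, turn right
            known[(nr, nc)] = "#"
            heading = (heading + 1) % 4
        else:                        # free cell: move forward and map it
            r, c = nr, nc
            known[(r, c)] = "."
    return known


if __name__ == "__main__":
    mapped = explore()
    for i, row in enumerate(ROOM):
        print("".join(mapped.get((i, j), "?") for j in range(len(row))))
```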

It was the coolest thing I had built up to that point. I still have a few cassettes and 160-kilobyte floppy disks, but I doubt they are readable any longer, so the software is probably lost forever. Now, 30 years later, I'd like to understand my thinking back then.

That software, or the essence of its algorithms, could now run on the BaseN Platform, with access to terabytes of memory and thousands of processors. It would be the car's spime. And I would make it way cooler than R2D2 ever was.

//Pasi

21.02.2013 - Measurable Empathy

When BaseN hires people for any position, half of the interview is always dedicated to an ad hoc role play where the applicant is presented with a scenario built from components of past BaseN endeavors. All interviews include two BaseN people, or more, depending on the scenario.

Compared to the traditional interviews we conducted a few years ago, we've concluded that the scenario method yields far more information about the applicants. Most surprising are the otherwise promising candidates who simply decline to participate, saying they would have needed time to prepare. But.. in most of our positions, tasks must be faced without a period of preparation, using the skills at hand. Such an interview ends there.

So what does our scenario model actually measure? After a few dozen interviews and a lot of thinking, I believe the answer is empathy. People who excel in these scenarios can be outgoing or shy, independent or collegial - very different people indeed.

My conclusion is that in a dynamic workplace like ours, empathy - and, along with it, the ability to run through mental scenarios without one's own prejudices - is by far the most important skill people must possess. It enables people to continuously develop and to adapt quickly to new situations while maintaining a curious mind.

In other words, dreaming is allowed and encouraged - provided that it involves the BaseN Platform during working hours.

Because we're no robots at BaseN: Empathy matters!

//Pasi

24.09.2013 - Welcome to the Spime Farm

During the last couple of years, the BaseN Platform has been used to monitor and control an increasing number of nodes, or devices, outside the traditional telecom and IT realm. As a consequence, we've greatly developed our capabilities for securely hosting complex algorithms which analyze - and, perhaps even more importantly, control - things ranging from solar inverters to rat traps.

Fast-forward a few of our customer's product development cycles. What do we actually host? The rat trap may send us images, temperature, humidity and olfactory sensor data, while we (the Platform) issue the lethal blow if, and only if, exactly the right species of rat (and definitely not the rare red crested tree rat) enters the trap.
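
A minimal sketch of the gating logic only; the classifier, species lists and confidence threshold are invented placeholders, not our customer's actual algorithm.

```python
# Hedged sketch: the Platform-side decision - trigger if, and only if, the
# classified species is a target and never a protected one.
PROTECTED = {"red crested tree rat"}
TARGET = {"brown rat", "black rat"}


def classify(snapshot: dict):
    """Stand-in for the real image/olfactory classifier: (species, confidence)."""
    return snapshot.get("species", "unknown"), snapshot.get("confidence", 0.0)


def should_trigger(snapshot: dict, min_confidence: float = 0.98) -> bool:
    species, confidence = classify(snapshot)
    if species in PROTECTED:
        return False                       # never, regardless of confidence
    return species in TARGET and confidence >= min_confidence


if __name__ == "__main__":
    print(should_trigger({"species": "brown rat", "confidence": 0.99}))             # True
    print(should_trigger({"species": "red crested tree rat", "confidence": 0.99}))  # False
```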

The trap itself is, in fact, an algorithm within our Platform with (somewhat cruel) physical extensions in the form of a killing spring and an array of sensors. This kind of physically augmented virtual entity is called a Spime, a term coined by author Bruce Sterling in 2004. The point is to emphasize the model over the manifestation.

Spimes, which record and manage the full lifecycle of their physical representations, enable immense efficiency improvements when combined with recycling and 3D printing technologies.

We believe that the roles of hardware and software will intermingle toward the spime ideal, and that there will be a need to manage untold numbers of variegated spimes-in-the-wild. This is the direction we're heading, empowering our customers to devise their own evolving algorithms for their own spimes. The first step on this road is my.basen, our first service that can be fully activated and operated via the web. Customers register, log in, start sending data and create algorithms and actions. All as a service.

//Pasi

26.08.2013 - Energy Trading, BaseN Style

I recently switched our electricity provider and got a new contract that supports hourly pricing, which means that every hour has a different price tag per kilowatt-hour.

Here in the Nordics the electricity market has been somewhat open for the last 15 years, and there's a marketplace called Nordpool where buyers and sellers of electricity trade their kilowatt-hours in a stock-market-like fashion.

There is, though, one major difference from a real stock exchange: tomorrow's prices are known today, giving utilities a lot of room to shift their production to the most profitable hours.

So, without adding any hardware, I decided to create an algorithm that estimates our electricity usage for the next day and schedules most of the consumption into the cheapest hours. This is possible thanks to our battery bank, which can store about 50 kWh and can support the house for 2-3 days without external power.
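
A minimal sketch of the scheduling idea, with invented prices and sizing; the real algorithm also forecasts consumption and tracks battery state, but the core is simply to buy the day's energy during the cheapest hours.

```python
# Hedged sketch: pick the cheapest hours from tomorrow's known spot prices,
# limited by how fast the battery bank can absorb energy.
def plan_charging(prices_eur_per_kwh, daily_need_kwh, max_charge_kw):
    hours = sorted(range(24), key=lambda h: prices_eur_per_kwh[h])
    plan, remaining = {}, daily_need_kwh
    for h in hours:
        if remaining <= 0:
            break
        draw = min(max_charge_kw, remaining)   # kWh bought during hour h
        plan[h] = draw
        remaining -= draw
    cost = sum(kwh * prices_eur_per_kwh[h] for h, kwh in plan.items())
    return plan, cost


if __name__ == "__main__":
    # Example day-ahead prices (EUR/kWh), cheapest at night.
    prices = [0.03, 0.02, 0.02, 0.02, 0.03, 0.04, 0.06, 0.09, 0.10, 0.09,
              0.08, 0.07, 0.07, 0.06, 0.06, 0.07, 0.08, 0.10, 0.12, 0.11,
              0.09, 0.07, 0.05, 0.04]
    plan, cost = plan_charging(prices, daily_need_kwh=30, max_charge_kw=6)
    print("charge during hours:", sorted(plan), f"- estimated cost {cost:.2f} EUR")
```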

I had not reviewed Nordpool prices for a while, so I was quite surprised that my August bill is going to be 40% lower than in a usual summer month without adjustments. I believe the difference will be even larger during the winter, when the geothermal pump kicks in and Nordpool prices vary more.

What I'm really excited about is that all this was done outside the power distribution racks, purely in software that lives some 50 kilometers away.

//Pasi

27.03.2013 - RAE - Revenue Assured Engineering

In our latest customer meetings it has become increasingly clear that traditional paradigms for managing extensive network assets are becoming obsolete. This is primarily due to increased competition and the subsequent need for simpler, quickly extensible and more efficient networks.

In 2001, Network Operations Center (NOC) people wanted to see bits per second for capacity, drops per second for errors, and processor utilization to justify fancy new hardware. A couple of years ago, the NOC converged into a Service Operations Center (SOC), which verified the services purchased by customers. This still offered a chance to buy fancy Business Support Systems (BSS) with humming servers - still a paradise for substance- (hardware-) dependent people.

Today there might be no physical xOC location left, only a group of screens and a few people taking calls and arranging repairs, implementations and upgrades to the network.

Tomorrow those screens may no longer show bits, bytes or any other deeply technical information. It will be all about dollars (or euros) generated, lost or jeopardized - per second, per purchased asset.

BaseN can currently display, minute by minute, a geographical map of, say, 3,700 3G/LTE base stations and assign each a color status according to the calculated revenue/profit it produces. In a negative or zero revenue situation, a base station is either misplaced (in the original engineering), misconfigured or malfunctioning. Only this then triggers an engineering action.
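
A minimal sketch of the color-coding step only, with invented thresholds and station figures.

```python
# Hedged sketch: map per-station revenue to a status color.
def status_color(revenue_eur_per_min: float) -> str:
    if revenue_eur_per_min <= 0:
        return "red"      # misplaced, misconfigured or malfunctioning -> engineering action
    if revenue_eur_per_min < 0.50:
        return "yellow"   # marginal
    return "green"        # earning its keep


if __name__ == "__main__":
    stations = {"BTS-0001": 1.20, "BTS-0002": 0.10, "BTS-0003": -0.05}
    for name, revenue in stations.items():
        print(name, status_color(revenue))
```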

This is Revenue Assured Engineering using the BaseN Platform, and it applies to networks and assets in energy, telecom and related infrastructure from which data can be acquired. It starts at the design and simulation table and grows with the revenue-generating assets.

Euros flowing nicely

//Pasi

27.05.2013 - Situational Awareness - Critical but not Obvious

A couple of months ago BaseN decided to move to a new mobile operator, primarily due to pricing and flexibility in subscription management. Besides normal employee subscriptions, we also manage quite a bunch of data-only, machine-to-machine-type SIM cards in our various Smart Grid projects.

Our new operator has its own physical network, in addition to being a mobile virtual network operator (MVNO) in someone else's network. This physical network also happens to be different from the one we were carried on before.

All mobile networks suffer from outages and areas of poor reception, and operators continuously try to improve service quality. While our previous operator had difficulties at one spot on my way from home to work, the new one has three areas where calls are dropped and data service is unavailable.

My commute usually takes about 40 minutes, which I practically fill with sync calls with members of my management team. Having my call dropped three times during an average commute prompted me to issue a problem report to the operator's technical service.

To my delight, my case was assigned to a very knowledgeable engineer, who first had me sign a special troubleshooting permission. He then surprised me by asking for the exact times and locations where I had experienced poor network quality. It turned out that he had to order the call logs in question from another part of the organization and analyze them manually. Experienced as he is, he recognized one of the problem spots and mentioned that those specific 3G base stations sometimes fail to hand over calls properly.

From a systemic perspective, this means that this large mobile network does not provide Situational Awareness (SA) to its managers, at least when it comes to the bulk of revenue-generating services like GSM calls. At many of our telecom customers we have built the SA view from the ground up, using data feeds available even from the smallest network components. This kind of SA creates a positive feedback loop that does not depend solely on negative customer experiences. Furthermore, it promotes engineering talent, as more and more people understand and get to solve real problems.
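
A minimal sketch of building one such SA signal from raw feeds; the event format and alert threshold are invented.

```python
# Hedged sketch: per-cell dropped-call rate, flagged before customers complain.
from collections import Counter

events = [  # (cell_id, outcome) as they would stream in from the network
    ("cell-17", "completed"), ("cell-17", "dropped"), ("cell-17", "completed"),
    ("cell-42", "completed"), ("cell-42", "completed"), ("cell-42", "completed"),
    ("cell-17", "dropped"),
]


def drop_rates(events):
    totals, drops = Counter(), Counter()
    for cell, outcome in events:
        totals[cell] += 1
        if outcome == "dropped":
            drops[cell] += 1
    return {cell: drops[cell] / totals[cell] for cell in totals}


if __name__ == "__main__":
    for cell, rate in drop_rates(events).items():
        flag = "ALERT" if rate > 0.1 else "ok"
        print(f"{cell}: {rate:.0%} dropped ({flag})")
```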

The need for SA may not be obvious, though. In many areas, the telecom industry is still managed by individual engineers who thrive on being heroes of the moment, fixing the network on the fly. But that.. does not scale. Without SA, technical development gradually slows down.

SA view BaseN style

//Pasi

27.06.2013 - Cloudwashing

A couple of weeks back, at CIRED, I participated in a roundtable discussion on smart energy usage at home. Although the event was primarily geared towards high-voltage utility people, among large ABB, Siemens and Vattenfall booths, the roundtables inspired many visionary but surprisingly practical ideas.

One of them was the cloud-based washing machine, which I contemplated with Electrolux, the famous Swedish appliance maker. One of their problems is that, when manufacturing products with a lifespan of 25+ years, it is increasingly difficult to find, for example, color LCD displays and switch knobs with the same durability and guarantees.

One way to overcome this is to outsource the whole user interface to the customer's own device, be it a phone, a tablet or any other connected browser. The physical washing machine just has Ethernet or WiFi connectivity and simple fail-safe logic that is continuously updated from Electrolux's cloud service.

The user interface (UI) dilemma is now solved and manufacturing costs slashed. This is all good, but suddenly the cloud offers an arsenal of new opportunities - how about measuring water pH, temperatures and outgoing dirt content accurately and adapting the washing program accordingly, using data from millions of connected washing machines? Furthermore, clothes could have permanent NFC/RFID tags informing the machine of temperature and other restrictions. Programs tailored for specific detergents?
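
A minimal sketch of the cloud-side adaptation, with invented thresholds and parameters - not an Electrolux design.

```python
# Hedged sketch: sensor readings in, adjusted wash parameters out.
def adapt_program(water_ph: float, inlet_temp_c: float, dirt_index: float) -> dict:
    program = {"temp_c": 40, "duration_min": 60, "detergent_ml": 50}
    if dirt_index > 0.7:          # heavily soiled load: hotter and longer
        program["temp_c"] = 60
        program["duration_min"] += 20
    if water_ph > 7.8:            # hard/alkaline water: more detergent
        program["detergent_ml"] += 15
    if inlet_temp_c > 30:         # warm inlet water: shorter heat-up phase
        program["duration_min"] -= 10
    return program


if __name__ == "__main__":
    print(adapt_program(water_ph=8.1, inlet_temp_c=18, dirt_index=0.8))
```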

The cloud easily has a thousand times the computing and storage capacity of a hardened, embedded processor in a household machine. Like Xboxes and PlayStations, these machines need continuous cloud access with smart offline capabilities in case connectivity is lost. Doable, and it would eventually rid us of the hieroglyphic UIs on current washing machines.

...and cloudcooling?

//Pasi

30.01.2013 - Tale of Two Delays

The human mind adapts readily to its environment and to the flow of sensory information, in order to maximize processing capacity at any given moment. This has been useful in fight-or-flight situations during most of human evolutionary history.

However, this seizing of the moment has its downsides. Longer-term wisdom and knowledge are easily sacrificed when day-to-day and minute-by-minute issues are actively being solved while bigger issues remain at large.

When we are faced with severe structural problems like emitting more and more CO2 while damaging the environment, corrective actions are executed very slowly and inefficiently. Why might this be?

I think one of the primary reasons is the blissful delay in real data. For instance, the Intergovernmental Panel on Climate Change (IPCC) publishes reports sometimes years apart, raising awareness (and opposition) for a short period while we continue business as usual during the silence. Just like our stone-age brains evolved to do.

The IPCC would be far more interesting and capable if it provided real-time measurements and analysis on a minute-by-minute basis. Any suspicion about doctored results would evaporate, as the raw material would be online for anyone to throw algorithms at.

My postulate (and guiding principle in life) is that a delay is beneficial only when it gives you more time to think. Delaying real time data is closer to self-deception.

//Pasi

08.05.2013 - Measurable Medicine

Next week we'll demonstrate our first measurements coming from humans instead of the usual machines. It will not be brain EEG yet, but blood pressure and pulse from our booth visitors at the TM Forum event in Nice.

Medical measurements have long been governed by strict regulations and common practices, while granularity and long term analyses have been primarily tied to university level research, as I contemplated in a previous blood pressure blog entry.

The granularity of medication has also remained stable for tens of years, with most pharmaceutical companies manufacturing 2-4 different sizes of tablets and other types of doses.

People vary a lot in size and metabolism. When we have accurate measurements from the body, dosages can be adjusted individually and treatments timed precisely based on the antibodies and pathogens present. This would be especially beneficial with antibiotics, the overuse of which is a growing problem.

You're heartily welcome to our booth - just prepare to be measured. But don't worry, this time results will be anonymized.

//Pasi

01.09.2014 - Echoes from Natanz

This morning I spotted an alarm from our heating system, brought to the mobile view as minor/informational and triggered by the BaseN Platform's trend analysis. Usually it would have gone unnoticed, but when I am using the biomass furnace I tend to check temperature deltas in the morning to see whether more wood should be added.

The alarm was caused by a sudden drop in heating power, reported through the incoming and outgoing temperature sensors connected to the floor heating pipes. A closer look showed that between 4am and 5am the heating valve had been fully closed for 50 minutes, after which it resumed normal operation.

As the house has more than 40 tons of concrete, this caused no noticeable indoor temperature changes or other issues. However, I don't believe in ghosts so I wanted an explanation.

It turned out that this phenomenon happens every year around August, when night temperatures drop to around 10 degrees - 10.7 degrees, to be exact.

While the heating controller is connected to the BaseN Platform, it has its own embedded program that controls the heating valve using a predetermined curve, issuing pulses to a stepper motor that physically adjusts the valve. The curve is based on outdoor and heating-water temperatures and compensated by indoor temperature.

Now, at 10.7 degrees, this Negative Temperature Coefficient (NTC) thermistor happens to report a resistance that the embedded controller converts to.. zero. While I don't have the source code for the embedded program, I am relatively sure that we're seeing a classic divide-by-zero bug, causing the program loop to abort and the stepper motor to be instructed to close the valve. At 10.8 or 10.6 degrees everything is fine.
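
A minimal sketch - not the controller's real firmware - of how such a failure mode can play out: one raw reading converts to exactly zero, a later division blows up, the loop dies and the fail state leaves the valve closed.

```python
# Hedged sketch: invented conversion table and heating-curve math.
def converted_outdoor_temp(raw_adc: int) -> float:
    """Pretend ADC-to-temperature conversion; one raw value maps to exactly 0."""
    table = {512: 10.8, 513: 0.0, 514: 10.6}   # 513 is the bad bin (~10.7 C)
    return table.get(raw_adc, 20.0)


def valve_position(outdoor_c: float, water_c: float) -> float:
    # Naive heating-curve math that divides by the outdoor reading.
    return max(0.0, min(100.0, 100.0 * (water_c / outdoor_c) / 10.0))


if __name__ == "__main__":
    for raw in (512, 513, 514):
        try:
            print(raw, "->", round(valve_position(converted_outdoor_temp(raw), 35.0), 1), "% open")
        except ZeroDivisionError:
            print(raw, "-> control loop aborted, valve left closed")
```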

While I'm happy that this is not a nuclear reactor cooling system or a logic controller managing centrifuges, it clearly highlights the problem with embedded programming that is not continuously verified and measured. Isolation does not bring security; rather, it conceals potentially catastrophic bugs. This is why we advocate thorough data collection and, ultimately, Spimes with a multilayer, transparent structure.

//Pasi

02.10.2014 - Citizen Spime

Governments here in Finland and elsewhere often struggle with new e-services offered to citizens. From social security projects like Obamacare to electronic voting, we've seen millions of euros spent on systems ultimately rejected by end users due to performance and complexity issues.

My fancy national identity card does include an X.509 certificate, but I'm one of the very few people who actually uses it to access government services, when the authentication mechanism happens to work. (Yes, I've filed more than ten fault reports.)

Be that as it may, offering a new web/mobile service to even just 5 million Finns is a daunting task, requiring careful long-term planning when it comes to scalability, fault tolerance and forward compatibility.

The BaseN way to support the proliferation of these important services is to model each citizen interaction and produce rigorous measurement data from all transactions, across all government e-services. This data is then analyzed in real time and presented - also in public - as a digital proof that the service is performing as expected.

For example, an e-prescription system should record every authentication, data retrieval, display and creation event with millisecond accuracy. This can be turned into real-time analysis, presented as a dashboard for all users, much like large factories display their work safety figures (no accidents for n days, etc.) at their gates. It is time taxpayers got real data on how the systems they fund actually work.
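
A minimal sketch of the measurement idea, with invented event names and figures: record each transaction with millisecond accuracy, then publish the aggregate.

```python
# Hedged sketch: log transactions and compute a public dashboard summary.
import time
from statistics import quantiles

log = []  # (event_name, duration_ms, success)


def record(event: str, started: float, success: bool) -> None:
    log.append((event, (time.time() - started) * 1000.0, success))


def dashboard(entries):
    durations = [d for _, d, _ in entries]
    ok = sum(1 for _, _, s in entries if s)
    p95 = quantiles(durations, n=20)[18]   # ~95th percentile latency
    return {"transactions": len(entries), "success_rate": ok / len(entries), "p95_ms": round(p95, 1)}


if __name__ == "__main__":
    for i in range(50):
        t0 = time.time()
        time.sleep(0.002)                  # stand-in for the real transaction
        record("authentication", t0, success=(i % 25 != 0))
    print(dashboard(log))
```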

Improving the quality of these systems as data sources will be of paramount importance when governments start entering the Spime world, especially in public healthcare applications - the low-hanging fruit for reducing government expenditure and improving national health.

//Pasi

03.05.2014 - Hairspime

The Internet of Things is often confined to lifeless physicality, which greatly limits its conceivable applications. Things, after all, rarely used to have true intelligence outside horror movies.

Spimes will remedy this by bringing a logical brain to anything physical. However, since we're quite used to the nature of the current thingverse around us, new spime concepts require new kinds of thought processes.

Take your hair. Most of us tend to it regularly, with substances like shampoo and conditioner. These liquids (or sometimes even powders) are big business, advertised by the top actresses at bus stops, in magazines and countless other places.

Each of us has different hair, so there's a large selection of shampoos even at the local store, and thousands of choices online. How do I know which one is good for me? Well, so far it has been trial and error.

What I've learned in different parts of the world is that water quality, mainly hardness (calcium content), usually has a bigger effect on the wash result than the shampoo used. However, unlike with coffee makers, this is hardly ever considered by shampoo manufacturers.

Enter the hairspime. I'd like a hairbrush with accelerometers, pressure sensors and a water quality sensor - all common and cheap pieces of electronics. The water data, my brush movements (the entanglement of my hair) and hair hardness are transferred to a spime, which then selects and delivers the right shampoo formula for me, automatically by mail.
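
A minimal sketch of the selection step only, with invented formula names and thresholds.

```python
# Hedged sketch: water and brush data in, a shampoo recommendation out.
def pick_formula(water_hardness_dh: float, tangle_score: float) -> str:
    # tangle_score: 0 (brush glides through) .. 1 (heavy resistance)
    if water_hardness_dh > 14:             # hard water: chelating formula
        return "chelating-moisturizing" if tangle_score > 0.5 else "chelating-light"
    return "moisturizing" if tangle_score > 0.5 else "everyday-light"


if __name__ == "__main__":
    print(pick_formula(water_hardness_dh=18, tangle_score=0.7))
```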

And we'd have another SaaS, Shampoo-as-a-Service. Because I'd be worth it.

//Pasi