BaseN

14.02.2011 - The Immaterial Phone

On Friday Nokia announced that they'll introduce a third operating system - Windows Phone 7 - on some of their high-end devices. Compared to the serious competition, which runs either Google Android or Apple iOS, I find this move puzzling. How does one keep up with the evolution of new features while supporting three different operating systems?

I've been a Nokia phone user since my Mobira Talkman in the late 1980s, mainly because Nokia has provided me with increasing capabilities at a steady pace. With today's N900 I am able to use every IT system of our company over an encrypted VPN. In addition, the underlying Linux allows full-scale administrative access to our BaseN Platform, a feature not often used - but the capability is there.

However, I've always found changing to a new phone troublesome and time-consuming. Calendar, address book, applications and other settings need to be synchronized, and this rarely goes smoothly with the provided migration applications.

By introducing an immaterial phone, Nokia could, with its strong roots in telecommunications, become a game changer once again. Imagine if the software of your phone existed primarily in the computing cloud, decoupled from the hardware. You could then invoke it with a web browser, a tablet computer or a 'surrogate' phone whenever necessary, your settings always being in sync. Upon hardware failure or loss, you would just tell the cloud service the serial number of your new hardware and in a few moments your environment would return. Or you could have multiple synchronized phones.
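
To sketch what I mean - purely illustratively, with invented class and method names rather than any existing Nokia or BaseN interface - the core of such a service could look like this:

    # Hypothetical sketch of the "immaterial phone" idea: the phone's state lives
    # in the cloud, and any handset is just a surrogate identified by its serial
    # number. Class and method names are invented for illustration.
    from dataclasses import dataclass, field


    @dataclass
    class PhoneState:
        contacts: dict = field(default_factory=dict)
        calendar: list = field(default_factory=list)
        settings: dict = field(default_factory=dict)


    class CloudPhone:
        """One user's phone environment, decoupled from any particular hardware."""

        def __init__(self, user_id):
            self.user_id = user_id
            self.state = PhoneState()
            self.devices = set()          # serial numbers of surrogate handsets

        def register_device(self, serial):
            """Bind a new or replacement handset and hand it the full state."""
            self.devices.add(serial)
            return self.state             # every registered device sees the same state

        def update(self, **changes):
            """Any device or browser session writes changes back to the cloud copy."""
            for key, value in changes.items():
                setattr(self.state, key, value)


    # Lost the handset? Register the replacement's serial and the environment returns.
    phone = CloudPhone("pasi")
    phone.update(settings={"language": "fi"})
    restored = phone.register_device("SN-NEW-12345")
    print(restored.settings)              # {'language': 'fi'}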

This would also allow for novel ways to interact with your operator - wherever there is IP coverage (a WLAN hotspot, for instance), all GSM/CDMA traffic could be transported over it, altering the international roaming cartel quite radically. The protocols and technologies for this are largely ready, as we can see from the growing deployments of femtocell base stations connected to consumer broadband.

The existing Ovi portfolio, in parallel with similar services from Google and Apple, already offers data storage and backup services, but their success has been limited as they've been marketed only as add-on features. Changing the game would require a wholehearted effort to introduce a truly new phone concept. Someone will do it sooner or later, and I'm hoping it'll be Nokia.

//Pasi

06.08.2013 - Technological Wasteland

Intelligence gathering and services have existed since the earliest civilizations. Until now, however, they have been relatively expensive, requiring a lot of manpower and infrastructure.

Today's Internet, in turn, provides a very affordable, virtually free data warehouse for intelligence operatives. The recently revealed PRISM and XKeyScore probably carry a combined price tag smaller than that of a single foreign-country unit in the 1980s. Echelon, which caused some stir in the 1990s, was still a formidable investment.

Although I feel strongly about small government, citizen privacy and free speech, these are in my opinion not the biggest issues when it comes to this massive surveillance of foreign countries, their companies and their people - I'm concerned about innovation on a global scale.

If all (national) research and development data is immediately available to a foreign intelligence agency and its contractor companies, it is just a matter of time before groundbreaking innovations and discoveries start to happen in a country that has its own R&D plus everything from, say, a friendly European Union. And I'm not talking about state-industrial espionage, which is illegal per international agreements. I'm talking about young people becoming subject matter experts in those vast intelligence organizations, then taking the next steps in their careers as engineers and inventors at existing and future companies.

My postulate is that a country or continent that becomes a technological wasteland steadily degrades into a problem and conflict area. When millions of young people say that their dream is to work for the government because nothing else pays off (as happened in Egypt), conflict is at the door.

In the US, the National Security Agency (NSA) also has responsibility for helping US companies keep their confidential data safe. It has introduced e.g. the Security-Enhanced Linux modules, the SHA-2 hash algorithms and a lot of training for technology companies, both private and state-owned.

Here in the EU, we urgently need to educate our research institutes and companies to excel in strong data encryption and related security technologies. Keeping data private should be a citizen skill, fostered by the government, if we are to maintain our innovation capabilities.
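
To illustrate how low the technical bar already is, here is a minimal sketch using the open-source Python cryptography package; the payload is made up, and the real challenge is making such habits routine, not writing the code:

    # Minimal sketch of authenticated symmetric encryption with the open-source
    # 'cryptography' package (pip install cryptography). The message is illustrative.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this key safe and offline
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"unpublished research notes")   # illustrative payload
    plaintext = cipher.decrypt(ciphertext)                        # requires the same key

    assert plaintext == b"unpublished research notes"
    # Without the key, the ciphertext is just noise to any eavesdropper.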

//Pasi - PGP Public Key 551D0D20

11.06.2013 - Consumption.. Now

People in intelligence branches tend to say that the value of reconnaissance data decreases exponentially with its age. Although historical analysis is important, having real-time data available is critical for better decision making. After all, the human brain's short-term memory drives our primary cognitive functions.
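
As a back-of-the-envelope illustration of that decay (the 24-hour half-life below is an assumption for the example, not a measured figure):

    # Illustrative only: exponential decay of a data point's value with age.
    import math

    def data_value(initial_value, age_hours, half_life_hours=24.0):
        """Value of a reading decaying exponentially; the half-life is assumed."""
        return initial_value * math.exp(-math.log(2) * age_hours / half_life_hours)

    print(round(data_value(100, 0), 1))        # 100.0 - a fresh reading
    print(round(data_value(100, 48), 1))       # 25.0  - two days old
    print(round(data_value(100, 24 * 60), 1))  # 0.0   - two months old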

We experience the same inflation when our utilities present our energy and water consumption data with delays of 24 to 48 hours (or sometimes up to two months). Such data is no longer actionable and usually just makes us feel guilty - for a while, as short-term memory has an efficient garbage collection system.

Last week we were discussing our services with one of the largest local utilities, and how they relate and integrate with the utility's existing Automatic Meter Reading (AMR) and Meter Data Management (MDM) systems. When we inquired about their near-future requirements and wishes, the initial answer was somewhat startling:

"We'd like to know how much electricity our customers.. are using.. Now."

Well, that's something we deliver by default. The current AMR/MDM structures are too often bound to legacy billing cycles of 1-2 months. Smart meters can and will do much better, if the infrastructure is designed to be real time from the beginning.
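
As an illustrative sketch only - the endpoint and payload below are invented, not our actual interface - a meter designed for real time simply pushes each reading as it happens instead of batching it into a billing-cycle export:

    # Illustrative sketch: a meter pushing readings as they happen, instead of
    # batching them into a 1-2 month billing-cycle export. The endpoint URL and
    # JSON payload are hypothetical.
    import json
    import time
    import urllib.request

    ENDPOINT = "https://example.invalid/meters/readings"   # placeholder URL

    def push_reading(meter_id, kwh):
        payload = json.dumps({
            "meter_id": meter_id,
            "kwh": kwh,
            "timestamp": time.time(),
        }).encode("utf-8")
        req = urllib.request.Request(
            ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(req, timeout=5)   # fire the reading immediately

    # In a real deployment a loop like this would run on (or next to) the meter:
    # while True:
    #     push_reading("meter-0001", read_register())   # read_register() is hypothetical
    #     time.sleep(10)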

//Pasi

11.07.2013 - Root Cause Determinism

Most modern software applications, internal or external to organizations, tend to become highly complex over time in terms of physical and logical servers, databases, front ends and other components. To troubleshoot these, many companies offer shrink-wrap products that promise to find the 'Root Cause' of any current or even new performance and reliability issue.

These products work well for, eh, shrink-wrap applications which also have deterministic, shrink-wrap issues pre-built into the monitoring product's problem database.

Our experience is that even a basic CRM application is usually customized, or deployed in a slightly different networking environment, so that no canned product can find the real performance bottlenecks.

We've always taken a different approach. In cases where our Platform is primarily used for performance tracking, we enable as many data feeds as possible from all software and hardware components in and near the target system. This easily generates a data flow of a few hundred kilobits per second, which is then templated and visualized in real time. All visualization components are modular, so the performance view can be very quickly adapted to match the application structure.
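
As a hedged illustration of the approach - the component names and the send_to_platform stub below are placeholders, not our actual feed interface - the collection side can be as simple as tagging every sample with the component it came from, so the visualization layer can be rearranged without touching collection:

    # Illustrative only: polling a few host-level metrics (available on most
    # Unix-like hosts) and tagging each sample with its source component.
    import json
    import os
    import time


    def sample_host_metrics(component):
        """Collect a few cheap metrics and label them with their component."""
        load1, load5, load15 = os.getloadavg()
        return {
            "component": component,        # e.g. "crm-frontend", "crm-db"
            "timestamp": time.time(),
            "load_1min": load1,
            "load_5min": load5,
            "load_15min": load15,
        }


    def send_to_platform(sample):
        """Placeholder for the real data feed; here we just print the JSON."""
        print(json.dumps(sample))


    for component in ("crm-frontend", "crm-appserver", "crm-db"):  # hypothetical names
        send_to_platform(sample_host_metrics(component))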

According to Donald Rumsfeld (who has clearly read his Clausewitz), there are known knowns, known unknowns and unknown unknowns. With application features and complexity increasing, it becomes more and more critical to scalably measure and infer the latter two.

//Pasi

12.03.2013 - Pseudoservices

My utility encountered a severe winter storm a few months ago, causing power outages for thousands of homes. Most mobile networks, though, were still up and running, as many base stations have battery and generator backups. Many people have smartphones, so they directed their browsers to the utility's website and looked for information about the outage and possible repair time estimates. The utility, like most others, has a graphical outage map, which had been paraded in the media just a couple of months earlier. So everything was supposed to be in order.

However, their outage visualization system collapsed after a few hundred simultaneous requests, which subsequently rendered their whole website unavailable.
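
For what it's worth, this particular failure mode has a well-known mitigation: pre-render the outage map into a static snapshot on a schedule, so the load on the visualization backend no longer grows with the number of visitors. A minimal sketch, with invented paths, interval and query function:

    # Illustrative sketch: pre-render the outage map to a static JSON snapshot
    # every minute, so thousands of browsers hit a cached file instead of the
    # visualization backend. Paths, interval and query_outages() are hypothetical.
    import json
    import os
    import time


    def query_outages():
        """Placeholder for the (expensive) query against the outage system."""
        return [{"area": "district-7", "households": 1240, "estimated_fix": "18:30"}]


    def write_snapshot(path="/var/www/static/outages.json"):
        snapshot = {"generated_at": time.time(), "outages": query_outages()}
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(snapshot, f)
        os.replace(tmp, path)     # atomic swap: readers never see a half-written file

    # A cron job or a simple loop keeps the snapshot at most a minute old:
    # while True:
    #     write_snapshot()
    #     time.sleep(60)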

My first thought was that this is just a sizing and configuration problem. However, now that I look at similar portals - utilities' energy consumption portals, for example - a different thought arises. I think these portals have been badly designed on purpose. Giving customers real-time information could generate inconvenient questions about the goals, preparedness and real technological level of the utility, so an 'unfortunate' website overload is a good firewall against criticism, at least for now.

This is one of the few places where regulation would help. With these energy prices, customers should get accurate and real-time information about their service level and consumption, through systems that can cope with the usage patterns people are now accustomed to. Unsurprisingly, one of the best-performing public services is the tax authority, which now sports fully electronic interfaces towards most companies and people. Scalability and user-friendliness are there. Yes, when money is being collected.

It is now time to get other essential services to the same level. If the tax authority can process 5M records in real time without a glitch, utilities can provide visibility into their services within seconds, without 24-hour or even one-hour delays. An information blackout is not the solution.

//Pasi

15.01.2013 - Big Data Cloud Pioneers (10 + 11)

When NASA launched the Pioneer 10 and 11 space probes in 1972 and 1973 respectively, local computing and storage were extremely expensive compared to today's resources. That's why it was logical to make them both fully Cloud-controlled, using NASA's Deep Space Network. Their software was updated countless times before 2003, when Pioneer 10 finally fell silent near the outskirts of our solar system, due to the power constraints of its plutonium-based radioisotope thermoelectric generators. This was some 20 years after the probes' planned lifetime.

The telemetry, radiation and numerous other sensor data amounted to a total of 40 gigabytes for both Pioneers, a formidable amount to be stored on the 800 cpi tapes of the late 1970s and even on the 6250 cpi ones of the early 1990s. A full-size 800 cpi tape reel holds a maximum of 5 megabytes.
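
A quick back-of-the-envelope calculation shows what that meant in physical media, using the 5 megabyte figure above:

    # Back-of-the-envelope: how many 800 cpi reels would 40 GB of Pioneer data need?
    total_bytes = 40 * 1024**3        # 40 gigabytes (binary interpretation)
    reel_bytes = 5 * 1024**2          # ~5 megabytes per full-size 800 cpi reel

    reels = total_bytes / reel_bytes
    print(f"{reels:.0f} reels")       # -> 8192 reels, i.e. thousands of tapes to keep converting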

NASA had no obligation to store 'secondary' data like telemetry, but fortunately one of the original systems engineers, Larry Kellogg, converted the tapes to new formats every now and then. Thanks to him, scientists are still making new discoveries based on the raw Pioneer data. Having it in raw format is also of exceptional value, as ever more advanced algorithms can be applied to it.

Today's embedded but cloud-connected environments have a lot to learn from Pioneers' engineering excellence and endurance planning. We just briefly forgot it when it seemed so easy to solve all storage and computing problems with local, power-hungry disks and CPUs.

[Image: Pioneer H, a never-launched sister of Pioneer 10 and 11. Courtesy of NASA]

//Pasi

16.10.2013 - Primordial Sea of Data

Hype has it that most organizations must start collecting and storing vast amounts of data from their existing and new systems, in order to stay competitive. In the past this was plainly called data warehousing, but now all major consultants and analysts rave about Big Data.

So what has changed? Apart from IT marketing, not much. Existing databases have been beefed up with enormous hardware, and some fresh innovations (like Apache Hadoop) have arrived to try to overcome the limitations of legacy SQL behemoths.

But a database is just a database. Without algorithms and filters analyzing the data, it is just like the primordial sea before any life forms appeared.
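
To make 'algorithms and filters' concrete: even a trivial streaming filter already turns raw rows into a decision signal. A minimal sketch, with invented readings and threshold:

    # Minimal example of a "filter" as opposed to a database: a streaming check
    # that flags readings above a threshold as they arrive, storing nothing it
    # does not need. The values and the 1.2 kW threshold are illustrative.
    from typing import Iterable, Iterator


    def over_threshold(readings: Iterable[float], limit_kw: float = 1.2) -> Iterator[float]:
        """Yield only the readings that exceed the limit - the rest stay in the sea."""
        for kw in readings:
            if kw > limit_kw:
                yield kw


    stream = [0.4, 0.7, 1.5, 0.9, 2.1]          # pretend meter readings in kW
    print(list(over_threshold(stream)))          # [1.5, 2.1]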

In the coming years, nearly everything around us will be generating Big Data, so collecting it all into any single location or database will be increasingly hard and eventually impossible. Databases, or the sea of data around us, will be highly distributed and mobile.

When reality soon eclipses the hype around Big Data, it'll be those algorithms and filters, and their evolving combinations, that carry the largest value in any organization.

Those entities, which we call Spimes, need a scalable, fault-tolerant and highly distributed home. It is also known as the BaseN Platform.

[Image: Rising data sea levels]

//Pasi

16.12.2013 - Platform Socks

Having EU size 48 (US 13) feet causes some inconvenience, as the selection of shoes and especially socks is quite small. Usually the largest available sock size is 46-48, which means they barely fit and thus wear out relatively fast, probably because the fabric is stretched to its maximum.

Socks have had a very similar user interface and sizing for centuries. There are a few providers of custom-made socks, but they charge steep prices and still start from the assumption that all feet have similar dimensions.

From our beloved Spime perspective (and mine), socks should quickly be spimified so that each new pair fits slightly better. They should also be reinforced at the spots where the previous pair experienced the most wear.

In order to sense wear and stretch, a sock might have a simple RFID tag connecting tiny conducting threads woven throughout its fabric. Each time the tag is read, it would transmit a 3D map of the sock, based on the matrix of broken and still-conducting threads, to a reader device - e.g. a mobile phone - capable of sending the data onward.
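
As a purely illustrative sketch - simplified to a small 2D grid, with grid size, names and the example readout all invented - such a wear map could be represented and mined like this:

    # Illustrative sketch: the sock's conducting threads as a small grid, where
    # False means a broken (worn) thread. From the grid we derive the wear spots
    # that the next pair should reinforce. Grid size and values are invented.
    from typing import List, Tuple

    WearGrid = List[List[bool]]   # True = thread still conducting, False = broken


    def wear_spots(grid: WearGrid) -> List[Tuple[int, int]]:
        """Return (row, column) coordinates of broken threads."""
        return [
            (r, c)
            for r, row in enumerate(grid)
            for c, conducting in enumerate(row)
            if not conducting
        ]


    # A tiny 4x4 example read from the (hypothetical) RFID tag: the heel area
    # has two broken threads, so the next pair gets reinforced there.
    tag_readout: WearGrid = [
        [True, True, True, True],
        [True, True, True, True],
        [True, False, False, True],   # worn heel
        [True, True, True, True],
    ]
    print(wear_spots(tag_readout))    # [(2, 1), (2, 2)]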

As most RFID tags are now mass-printed, the additional cost of the sock tag would hardly exceed 15 eurocents, so this would be feasible for most sock manufacturers.

Now add the BaseN Platform to host those millions of sock spimes (3D maps, current and historical), and my desired sock service is ready. I would subscribe right away if I could get new, better-fitting socks mailed to me just before the old ones start wearing through.

//Pasi