This leads to the next step in data-analysis complexity: correlating multiple data streams. This represents not just an improvement in tracking, but a fundamental shift. A well-trained algorithm that looks at all three of those data streams can tell you not just what you did, but how it affected you and, by extension, what you should be doing (and avoiding) for maximum benefit. In some situations, it can even warn you of impending cardiac events.
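To make that concrete, here’s a minimal sketch (in Python with pandas) of what correlating multiple streams involves. The file names, column names, and sampling rates are all hypothetical; the point is that independently sampled streams have to be aligned on a shared timeline before any algorithm can relate one to another.

```python
import pandas as pd

# Three hypothetical streams, each sampled on its own clock.
steps = pd.read_csv("steps.csv", parse_dates=["ts"])       # ~1 sample/minute
heart = pd.read_csv("heart_rate.csv", parse_dates=["ts"])  # ~1 sample/5 seconds
sleep = pd.read_csv("sleep_score.csv", parse_dates=["ts"]) # ~1 sample/day

# Resample everything onto a common hourly grid so the streams line up.
grid = (
    steps.set_index("ts")["steps"].resample("1h").sum().to_frame()
    .join(heart.set_index("ts")["bpm"].resample("1h").mean(), how="outer")
    .join(sleep.set_index("ts")["score"].resample("1h").ffill(), how="outer")
)

# Once aligned, "how did what I did affect me?" becomes a correlation:
# e.g., does activity a day earlier track with heart rate now?
grid["steps_prev_day"] = grid["steps"].shift(24)  # 24 hourly buckets = 1 day
print(grid[["steps_prev_day", "bpm"]].corr())
```

A real product would replace the simple correlation with a trained model, but the alignment step is where much of the engineering lives.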
It’s debatable whether the expansion of sensor types or the improvement in ML algorithms will ultimately have a greater impact. But there’s not much question as to which technology is most mature in combining the two: video tracking. So much money and effort has been thrown at video data at this point that facial recognition is the subject of legislation, and just about any moving object can be a candidate for self-driving.
The ubiquity of video tracking pushes data analysis even further, by allowing us to derive information from a camera and correlate it with data from other sensors. Peloton’s new “Peloton Guide” platform uses camera-based motion tracking to guide users through strength-building workouts, correlates the collected data with heart rate from a wearable, and offers suggestions on what workouts to do next.
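Peloton hasn’t published how its pipeline works, so the following is only an illustrative sketch of that kind of cross-device correlation: camera-derived events (hand-coded rep timestamps standing in for a pose-estimation model’s output) joined against wearable heart-rate samples arriving on a different clock. All names and values here are invented.

```python
import pandas as pd

# Camera side: timestamps of completed reps, as a vision model might emit them.
reps = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:00:03", "2024-01-01 09:00:08",
                          "2024-01-01 09:00:14"]),
    "exercise": ["squat", "squat", "squat"],
})

# Wearable side: heart-rate samples on their own clock.
hr = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:00:00", "2024-01-01 09:00:05",
                          "2024-01-01 09:00:10", "2024-01-01 09:00:15"]),
    "bpm": [92, 101, 118, 126],
})

# Attach the nearest heart-rate reading (within 3 seconds) to each rep,
# so effort can be measured per movement rather than per workout.
joined = pd.merge_asof(reps.sort_values("ts"), hr.sort_values("ts"),
                       on="ts", direction="nearest",
                       tolerance=pd.Timedelta("3s"))
print(joined)
```

With effort attached to individual movements, “what should you do next?” becomes a question the data can plausibly answer.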
That’s just scratching the surface, too: imagine what could be achieved in a retail space, factory, or home by identifying and tracking the movements of people (or pets) and correlating them with sales, production rate, time of day, or dozens of other metrics.
But remember, for all the promise of this new IoT and ML tech, we’re still looking for the same thing that the door switch on my childhood home offered: information that’s meaningful to the people it serves.
This realization might bring some lofty predictions about the future of tech crashing back to earth, but in fact, the challenge of sifting useful results from mountains of input has been around for decades. It’s certainly something that the familiar tools of human-centered design are equipped to address.
To produce an outcome that’s useful and meaningful to people, you still have to build empathy with the users and understand the context (human, commercial, and technological). You still have to experiment with the available technology, to understand what it’s capable of and how it might integrate into existing products and experiences. And you still have to run pilots—lots of them—in order to observe whether all this technology is actually delivering value to the users.
Cool tech, even when flawlessly implemented, isn’t successful if it doesn’t produce value or meaning…and users are the final arbiters of that. It might sound odd to say that empathy and iteration are the crucial tools for optimizing new advances in IoT and ML, but this is still tech in the service of people, so it has to be human-centered.
In the longer term, we can also expect more complex systems, where the insights developed by your data-powered algorithms aren’t delivered to people, but to other devices or platforms. That certainly changes the constraints of your output design, but in the long run it’s just an additional layer on a familiar cake.
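As a sketch of what “delivered to other devices” might look like mechanically, here an insight is published to a message bus that downstream devices subscribe to, rather than rendered for a person. The broker address, topic name, and payload schema are assumptions for illustration; paho-mqtt is a real client library (`pip install paho-mqtt`).

```python
import json
import paho.mqtt.publish as publish

# A hypothetical conclusion from some upstream analysis.
insight = {"user": "u123", "recommendation": "reduce_intensity",
           "confidence": 0.87}

# Any device subscribed to this topic (a treadmill, a thermostat, a
# scheduling service) can act on the insight without a human in the loop.
publish.single("home/fitness/insights", json.dumps(insight),
               hostname="broker.local")
```

No matter how many algorithms are handing off insights to devices that are acting on a person’s behalf, the person’s experience is still what defines success. That’s a truth that hasn’t changed, whether the data is being gathered by a multimillion-dollar video tracker or a switch on the door of a house in Cape Cod.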