Smartwatches are searching for independent reasons to exist in a finicky consumer market where years of being asked to constantly buy next-generation products have led to “upgrade fatigue.” Consumers don’t want new versions of old products; they want new functionality, new promises, and something to be excited about. With each expected generation of innovation, consumers are all but demanding to be wowed. It’s possible, but it takes serious work. Smartwatches are undeniably here to stay, but they need to learn a few more tricks before the general public treats them as more than small, wonderful gimmicks.
Even though the watch industry where I earned my wings can often resent my enthusiasm for smartwatches, I’ve been among the segment’s biggest fans, especially at a time when it was chic to bash smartwatches in the tech media. I was even hired by Samsung to host the launch press conference for a smartwatch in 2016. The reason I’ve always wanted smartwatches to succeed is that I knew that, before technology is surgically implanted in our bodies, we will be wearing it rather than carrying it around. The wrist just happens to be a very convenient place to keep things and glance at them while doing a great number of tasks. An advantage for smart, connected devices is that a watch on your wrist physically touches your body and sits within your very personal bubble.
I would also venture to say that unlike a mobile phone, a wrist-worn connected device can hear pretty much everything your ears can hear; in theory, with better hardware, it could hear even more. The point of this article is to propose that by constantly listening to its surroundings, a smartwatch can fulfill a new and as yet under-utilized purpose: using machine learning to let software recognize what a user is currently doing, all the while offering information or an experience relevant to that specific context. In very simple terms, a watch that hears you are outside (by recognizing sounds it has learned) might display a watch dial better suited to a bright environment. Conversely, when you exit a building into the night, the watch might automatically activate a backlight so the dial stays readable in the dark, because it recognizes simple cues such as evening insects or lighter traffic as indicators that it is likely nighttime.
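To make the idea concrete, here is a minimal sketch of the final decision step: mapping ambient-sound labels (assumed to come from some upstream audio classifier) to a display mode for the watch face. The label names, cue sets, and mode names are all hypothetical placeholders, not any real watch API.

```python
# Hypothetical sketch: choose a watch-face display mode from ambient-sound
# labels. The labels are assumed outputs of an audio classifier; none of
# these names come from a real platform.

DAYTIME_CUES = {"traffic", "birdsong", "crowd"}
NIGHTTIME_CUES = {"crickets", "sparse_traffic", "quiet_street"}

def choose_display_mode(detected_labels):
    """Pick a display mode from a set of classifier labels."""
    labels = set(detected_labels)
    if labels & NIGHTTIME_CUES:
        return "backlight_on"    # dark surroundings: activate the backlight
    if labels & DAYTIME_CUES:
        return "high_contrast"   # bright surroundings: maximize readability
    return "default"

print(choose_display_mode(["crickets"]))            # → backlight_on
print(choose_display_mode(["traffic", "crowd"]))    # → high_contrast
```

The real difficulty, of course, lies upstream in the classifier; once context arrives as labels, the dial logic itself can stay this simple.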
In my opinion, “audioscape” learning, in which a listening device transmits audio to an off-device cloud processor and the results are matched to what the user is doing, could allow a smartwatch to perform a range of increasingly interesting and useful tasks without any user interaction. Moreover, I assume that the more your device knows about what you are currently doing, the more helpful it can be when you glance at it. In various contexts the watch might offer you then-relevant information, such as the duration of your exercise, the names of the people in the room with you, or possibly a warning that your breathing (or lack thereof) indicates a need for medical attention.
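The loop described above can be sketched end to end: lightweight features extracted on the device, a cloud-side model that returns a coarse context label, and a glance-time choice of what to surface. Every function, threshold, and label here is an illustrative stand-in (the cloud model is stubbed with a loudness rule), not a real service.

```python
# A minimal, assumption-laden sketch of the "audioscape" loop:
# device-side features -> (stubbed) cloud classifier -> glance-time info.

def extract_features(samples):
    """Stand-in for on-device feature extraction (RMS loudness per window)."""
    if not samples:
        return {"rms": 0.0}
    mean_square = sum(s * s for s in samples) / len(samples)
    return {"rms": mean_square ** 0.5}

def cloud_classify(features):
    """Stub for the off-device model; thresholds are invented for illustration."""
    if features["rms"] > 0.5:
        return "exercising"      # loud, rhythmic environment (assumed)
    if features["rms"] > 0.1:
        return "conversation"
    return "quiet"

def glance_info(context):
    """Choose what to surface at a glance for a given context."""
    return {
        "exercising": "elapsed workout time",
        "conversation": "names of people nearby",
        "quiet": "breathing check",
    }.get(context, "default watch face")

features = extract_features([0.8, -0.7, 0.9, -0.8])
print(glance_info(cloud_classify(features)))    # → elapsed workout time
```

In practice the classifier would be a trained model and the features far richer, but the shape of the pipeline, and the privacy question of streaming audio off-device, is the same.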
Right now smartwatches carry a good assortment of sensors, including microphones. Anyone familiar with audio recording equipment knows that the microphones in such small devices are typically rudimentary, but with attention and investment they can get better. Already, all the major smartwatch platforms (among other devices) respond reasonably accurately when you speak to them. Talking to our devices (not just into them) is becoming more popular, and the technology to transmit sound in near real time is already relatively mature.
Other sensors in smartwatches measure motion, direction, and barometric pressure, alongside GPS and, in some cases, a heart rate monitor. That last sensor seems to be among the most complicated, and it is considered the most promising by the medical community (as well as insurance companies) because it measures vital signs. I think most people agree that there will be a lot of investment in smartwatches that can effectively collect a variety of vital signs and other data about someone’s body. I would add monitoring how someone sounds to that list of vital and personal medical status signs.
More generally, I think that listening smartwatches will be able to learn what we are currently doing, and from there software developers can imagine what users will want to see. Imagine that you know a user is standing in front of the mirror in their bathroom. The particular sound of the light switch and of footsteps signals to the always-learning software that the only room this could be is the user’s bathroom. After dozens of visits, the user finally confirmed when asked, “Is this your personal bathroom?”
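That confirmation loop can be sketched as a small learner: the software counts how often a pattern of co-occurring sound events recurs, and once a pattern is seen often enough, it prompts the user to attach a label. Every class, method, and event name here is hypothetical, a sketch of the interaction rather than any real implementation.

```python
# Illustrative sketch of learning a location from recurring sound patterns,
# with a user-confirmation step once a pattern recurs often enough.

from collections import Counter

class LocationLearner:
    def __init__(self, confirm_after=30):
        self.pattern_counts = Counter()
        self.confirmed = {}              # pattern -> user-confirmed label
        self.confirm_after = confirm_after

    def observe(self, sound_events):
        """Record one visit's co-occurring sound events; return a label,
        a prompt request, or None while still learning."""
        pattern = frozenset(sound_events)
        if pattern in self.confirmed:
            return self.confirmed[pattern]   # already a known location
        self.pattern_counts[pattern] += 1
        if self.pattern_counts[pattern] >= self.confirm_after:
            return "ASK_USER"                # time to prompt for confirmation
        return None

    def confirm(self, sound_events, label):
        """Store the label the user supplied when prompted."""
        self.confirmed[frozenset(sound_events)] = label

learner = LocationLearner(confirm_after=3)
visit = ["light_switch_click", "footsteps_on_tile"]
print(learner.observe(visit))   # → None (first visit, still learning)
print(learner.observe(visit))   # → None
print(learner.observe(visit))   # → ASK_USER (pattern seen often enough)
learner.confirm(visit, "personal bathroom")
print(learner.observe(visit))   # → personal bathroom
```

Asking only after many consistent observations is what keeps the prompt from feeling creepy: the watch offers a guess for the user to confirm, rather than asserting it knows where they are.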