Interesting Intel Partnership to Improve Wi-Fi Has Potential for Much More

This week, Intel, Wyebot, and Ambeent announced a partnership to use telemetry from laptops to improve Wi-Fi performance. It is an interesting idea: laptops are generally the largest data users on a wireless network; they tend to be Wi-Fi-only (unlike smartphones, which can also connect via cellular networks); and laptops tend to be stationary while in use, making them a better sensor platform than something more mobile. The data pulled from the laptops in this implementation would help network administrators better locate hotspots, potentially identify rogue hotspots and other wireless security issues, and better optimize existing Wi-Fi networks.
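To make the idea concrete, here is a minimal sketch of one check this kind of laptop telemetry enables: spotting a rogue access point that advertises the corporate network name from unrecognized hardware. The scan format, network name, and allow-list here are my own illustrative assumptions; the actual Intel, Wyebot, and Ambeent telemetry formats haven't been published.

```python
# Hypothetical sketch: flag possible rogue access points from laptop
# Wi-Fi scan telemetry. The scan format and allow-list are assumptions,
# not the partnership's actual data model.

KNOWN_BSSIDS = {                 # allow-list of IT-managed radios
    "aa:bb:cc:11:22:33",
    "aa:bb:cc:44:55:66",
}

def find_rogue_aps(scan_results, trusted_ssid="CorpNet"):
    """scan_results: iterable of dicts like
    {"ssid": "CorpNet", "bssid": "de:ad:be:ef:00:01", "rssi": -48}.
    Returns access points advertising our SSID from unknown hardware."""
    return [
        ap for ap in scan_results
        if ap["ssid"] == trusted_ssid and ap["bssid"] not in KNOWN_BSSIDS
    ]
```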

But I think the concept of using laptops, and even smartphones, as sensor platforms goes beyond wireless networks, because both carry a host of sensors that often includes microphones, cameras, thermal sensors, drop sensors, and sometimes even barometers.

As we move to smarter buildings, cities, and countries, a great deal of the cost lies in setting up sensor networks. But what if we could instead use existing sensors, like those in our personal computers? Couldn't we then get to the benefits of smarter everything more quickly?

Let’s explore that idea this week.  

Sensors everywhere

We need a lot of sensors because we want to optimize home, office, and city services, reducing waste while increasing the quality of those services. Smart things are supposed to give us a better experience at lower cost by focusing on efficiency, but to make the resulting systems most efficient, we need to capture, report, and act on location-based data so that increasingly automated systems can adjust to load and need.

Both laptops and smartphones have a host of sensors that are mostly underutilized but could provide much of what we need. For instance, to optimize traffic, a city could anonymize and aggregate location information from smartphones, couple it with historical behavior to account for both recurring and one-time traffic events, and respond by adjusting traffic-light sequences and, where possible, reconfiguring lanes on streets and freeways to address congestion in real time.
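As a rough sketch of the aggregation step, the snippet below buckets anonymized location pings into grid cells and counts devices per cell, the raw congestion signal a traffic system could compare against historical norms. The cell size and ping format are assumptions for illustration.

```python
from collections import Counter

CELL_SIZE = 0.001  # degrees; roughly 100 m cells at mid-latitudes

def congestion_by_cell(pings):
    """pings: iterable of anonymized (lat, lon) device locations.
    Returns a count of devices per grid cell; unusually high counts
    versus the historical average for that cell and time of day would
    suggest congestion worth reacting to."""
    counts = Counter()
    for lat, lon in pings:
        cell = (round(lat / CELL_SIZE), round(lon / CELL_SIZE))
        counts[cell] += 1
    return counts
```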

In the case of a disaster, sensors could help identify citizens in distress through the sounds their microphones pick up, identify safe zones where people can gather, and then direct people at risk to those zones. This would include ensuring that first responders go first to the locations where people appear to be most at risk.

Shock sensors are already used in smartwatches to identify falls. They could be used in a similar fashion in smartphones and, when people are carrying them, in laptops, detecting a potential accident or fall and sending an alert more quickly to help prevent serious injury.
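A common way to detect a fall from accelerometer data is to look for a brief near-weightless dip followed by an impact spike. The sketch below shows that pattern; the thresholds are illustrative guesses, not values from any shipping product.

```python
import math

FREE_FALL_G = 0.3   # near-zero total acceleration suggests free fall
IMPACT_G = 2.5      # a sharp spike afterward suggests an impact
# Both thresholds are illustrative; real products tune them carefully.

def looks_like_fall(samples):
    """samples: sequence of (x, y, z) accelerometer readings in g's.
    Returns True if a free-fall dip is followed by an impact spike."""
    falling = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude < FREE_FALL_G:
            falling = True            # device briefly weightless
        elif falling and magnitude > IMPACT_G:
            return True               # weightlessness, then impact
    return False
```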

In the case of a mass shooting, which seems to be a regularly occurring event in the U.S., triangulating the sound of the shots through the microphones in smartphones or laptops could help identify where the shooter is and then provide device users with directions to get out of harm's way. The devices could also auto-trigger their cameras, alerting the user that the camera has been turned on and needs to be pointed at the threat, so that responding law enforcement can mitigate the threat more efficiently and with a higher degree of officer safety.
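Triangulation here works from the fact that the same bang reaches different microphones at slightly different times. Given synchronized timestamps and known device positions, a solver can back out the source location. The sketch below is a standard time-difference-of-arrival setup, not anything a vendor has actually shipped for phones or laptops.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def locate_shot(mic_positions, arrival_times):
    """mic_positions: Nx2 array of (x, y) microphone locations in meters.
    arrival_times: N timestamps (seconds, shared clock) for one bang.
    Needs at least three microphones for a 2-D fix."""
    mics = np.asarray(mic_positions, dtype=float)
    times = np.asarray(arrival_times, dtype=float)

    def residuals(xy):
        dists = np.linalg.norm(mics - xy, axis=1)
        # Differences in distance to each mic should match the observed
        # differences in arrival time, scaled by the speed of sound.
        return (dists - dists[0]) / SPEED_OF_SOUND - (times - times[0])

    return least_squares(residuals, mics.mean(axis=0)).x  # (x, y) estimate
```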

With enough people filming a crime, investigators could build a simulation of the event in the metaverse, both to determine guilt and to successfully prosecute the perpetrator. Granted, this would all need to be done with consent, and only after processes were put in place to ensure that the privacy and safety of the device user weren't needlessly compromised.

Even as we build out Earth-2, Nvidia's massive simulation for better weather analysis and reporting, these mobile sensors could be invaluable for identifying anomalies and better capturing existing weather events, feeding the simulation and making it more accurate without requiring massive numbers of stand-alone sensors.
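One simple way crowd-sourced readings could feed such a simulation is outlier screening: compare each device's barometric reading against the local consensus and flag the ones that deviate sharply, which may indicate either a faulty sensor or a genuinely unusual local pressure event. A minimal sketch, with an assumed z-score threshold:

```python
import statistics

def pressure_outliers(readings_hpa, z_threshold=3.0):
    """readings_hpa: barometric pressures (hPa) reported by devices in
    the same area at roughly the same time. Returns readings far from
    the local consensus; cleaned consensus values would feed the
    simulation, while flagged outliers get a second look."""
    mean = statistics.fmean(readings_hpa)
    stdev = statistics.stdev(readings_hpa)
    if stdev == 0:
        return []
    return [p for p in readings_hpa if abs(p - mean) / stdev > z_threshold]
```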

Wrapping up:

We have a ton of sensors in our personal devices that are underused. They could be used to reduce our utility costs, provide better information on people's movements to better protect them, optimize the services around them, and create a safer, more efficient overall environment without as great a need to install additional sensors.

Intel, Wyebot, and Ambeent have pointed the way to this capability with an initial effort focused on optimizing Wi-Fi, but there are substantial additional benefits to be had from using in-place sensors. While I've mostly spoken about laptops and smartphones, sensors are also in connected appliances, connected (mostly electric) cars, and even our smartwatches.

Were all these sensors put to use effectively, we could do a better job of reducing financial and temporal waste while making us, and those we care for, far safer. As we move to more and more autonomous machines, these sensors could help prevent accidents between pedestrians and cars and get first responders to people in distress far more quickly. In short, I think this Intel partnership is a great first step, but the true benefits of a solution like this will come when these connected devices are used more aggressively as sensors monitoring and reporting on the world around us.