I was in a debate last Friday on the technologies we thought would have the greatest impact in the 21st century. My choice was AI and, this week, Microsoft pretty much proved me right. At its annual Build conference, Microsoft introduced two major efforts. One was a new PC, called Project Volterra, configured with four processors. It’s impressive, though initially it’s focused on creating ARM-native applications so that ARM PCs (the new computer is ARM-based) can perform at their full potential instead of being held back by the current required use of an x86 emulator. The four-processor configuration is new: the CPU and GPU come from Qualcomm, while the Neural Processing Unit (NPU) and Azure Computing Unit (ACU) are unique additions from Microsoft.
The other major announcement, highlighted by an offering called GitHub Copilot, was a major pivot: focusing AI on making developers more productive and on doing things you and I would like done by AI, instead of devoting AI’s power to sales or weapons, where much of the current focus unfortunately resides.
Let’s talk AI this week.
GitHub Copilot is one of those tools that could truly change the world because it makes coding accessible to people who don’t have much coding experience, and it is evolving toward a tool that could require no coding experience at all.
Currently, it can look at the code you’re writing and, like autocomplete for words, autocomplete the line of code once it figures out what you want to do. When coding, there is a lot of work that is largely repetitive and has to do with just getting access to the data or providing a UI (user interface) for the application. Much of this can be done by an AI because it’s simply repetitive and follows predictable paths. Copilot is a critical step toward eventually creating AIs that could do most coding.
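The kind of repetitive, predictable code this describes can be sketched in a few lines. In the sketch below, a developer would type only the comment and the function signature; the body is a hypothetical completion of mine illustrating the idea, not actual Copilot output:

```python
# Developer types the comment and signature; a Copilot-style
# assistant fills in the boilerplate body.
# (Hypothetical completion for illustration, not real Copilot output.)

import csv
import io

def parse_csv_rows(text):
    """Parse CSV text into a list of dicts keyed by the header row."""
    reader = csv.DictReader(io.StringIO(text))
    return [dict(row) for row in reader]

rows = parse_csv_rows("name,age\nAda,36\nAlan,41")
print(rows[0]["name"])  # prints "Ada"
```

Data-plumbing code like this follows such predictable paths that a model trained on public repositories can usually guess the whole body from the docstring alone.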
Microsoft isn’t the only company working on this; Google’s DeepMind team has demonstrated something very similar. But Microsoft is making its offering available to developers this summer, and the positive productivity impact should be significant.
But in Build Afterhours there were some even more interesting AI applications. For work, there was a kind of magical marketing tool that would write a paragraph of collateral after seeing just four or five words. Now, I’m pretty sure you’ll also need a plagiarism detector as part of this effort, because if almost everyone begins using this tool, the AI will tend to craft nearly identical paragraphs from near-identical common initial marketing sentences. But we’ve had plagiarism detectors for years, and they shouldn’t significantly hamper releasing an offering like this.
Even more fascinating was a demonstration of DALL-E (pronounced “Dolly”), an image-generation model created by Microsoft partner OpenAI. The demo featured a classroom of young kids, third or fourth graders, who would describe the invention they’d like to build to help improve the world. DALL-E would then create a compelling picture, cartoonish at this stage. Tools like NVIDIA Canvas could be layered on to transform that image into something far more photorealistic. It was fascinating to watch these kids describe a concept and then see what looked like an artist’s rendering of it.
They also showcased another AI application, based on OpenAI’s Codex model, that queried the web: it found the best-related article, identified the key sections tied to the query, and then summarized the answer far more completely than any digital assistant can today. And finally, they showcased how an AI could improve Minecraft by writing code, invisible to the user, that gives the game more capability depending on what the user wants to do. This last demo showed how you might turn a user into a coder without that user realizing they are writing code. That is some crazy future stuff right there. It suggests a future where the best coders never write a line of code; they are simply the best at communicating with the AI that does write it. One thing is for sure: in the future, a lot of us will need to improve our communication skills.
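The query-to-summary pipeline described above can be sketched as a toy: pick the article with the most overlap with the query, then keep only the sentences tied to it. Everything here, including the scoring and sample articles, is a hypothetical stand-in, not Microsoft’s actual Codex application:

```python
# Toy sketch of the query -> best article -> key sections -> answer
# pipeline demonstrated at Build. Scoring and data are hypothetical
# stand-ins, not the real Codex-based application.

def score(query, text):
    """Count how many words of the text appear in the query."""
    q = set(query.lower().split())
    return sum(1 for w in text.lower().split() if w in q)

def answer(query, articles):
    # Find the best-related article by query-word overlap...
    best = max(articles, key=lambda a: score(query, a))
    # ...then keep only the sentences tied to the query as a summary.
    sentences = [s.strip() for s in best.split(".") if s.strip()]
    return ". ".join(s for s in sentences if score(query, s) > 0)

articles = [
    "Volterra is a dev kit. It uses an ARM CPU.",
    "Copilot completes code. It suggests whole lines from context.",
]
print(answer("how does Copilot complete code", articles))
# prints "Copilot completes code"
```

A real system replaces both the word-overlap scoring and the sentence filter with a large language model, which is what makes the summaries so much more complete than today’s digital assistants.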
While Microsoft had a lot of interesting things at its Build conference this year, two stood out. The first was the revolutionary new hardware project for ARM developers, initially called Project Volterra, which will have substantially enhanced AI capabilities.
The second was AI advancements that could make all of us more productive, make classrooms more interesting, and begin to craft the next generation of human-machine interfaces based on natural language, interfaces that will reward those who can communicate accurately, completely, and with high integrity.
In the future, we’ll talk with our computers. At Build this year, Microsoft showcased significant progress in turning the user experience from requiring us to learn how to work with computers into a massive ecosystem where computers must learn to communicate with us. The highlights of Build ranged from these hardware and AI announcements and demonstrations to a massive influx of collaborative capabilities designed specifically to help remote workers. If you missed the show, you can still watch the videos.