Samsung has actually been doing something along these lines for a while now with their ecosystem. For example, with Samsung DeX, you can project your phone to a TV or a monitor, control multiple devices from one keyboard and mouse, drag and drop files between them, and generally blur the line between phone and desktop.
Microsoft has also taken some steps in this direction, like Windows integration with Android phones via the "Your Phone" app, letting you access texts, notifications, and even run certain apps from your PC.
The tricky part for Google is that doing this natively across all Android devices is a much bigger challenge. Getting phones, tablets, foldables, Chromebooks, and who knows what else to run the same seamless interface is a huge technical and UX task. Unlike Samsung, which controls its hardware and software tightly, Google is playing with a massive, diverse ecosystem.
So the real question is whether Google can really solve fragmentation at scale. Even if they manage it, the PC market is deeply entrenched, and Windows, macOS, and even ChromeOS already have loyal users. I think the winners might be users who want flexibility across devices, but it's less clear who loses; maybe some of the smaller OEMs or software platforms that can't keep up with a unified Android experience.
Computing is moving away from a single device form factor. Phones, tablets, PCs, foldables, and wearables are just the start. Google's vision of Android on any device is part of this trend. Expect devices that are context-aware: your laptop, phone, and smart screen working seamlessly together, shifting the interface and processing power to wherever you need it.
The heavy lifting is moving to the cloud and edge servers. Your device won't need to be insanely powerful locally; it'll stream processing from nearby servers. This will let even lightweight devices run complex AI, 3D rendering, or real-time simulations.
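To make that offload pattern concrete, here's a minimal sketch of a thin client handing a heavy job to a nearby edge node. Everything named here is an assumption for illustration: the edge.local URL, the /infer route, and the JSON payload shape are invented, not a real service.

```python
# Hypothetical example: a lightweight client offloading heavy work
# (e.g. AI inference or rendering) to a nearby edge server.
# The endpoint URL and payload fields are assumptions, not a real API.
import json
import urllib.request

EDGE_SERVER = "http://edge.local:8080/infer"  # hypothetical nearby edge node

def offload(task: dict) -> dict:
    """Send a compute-heavy task to the edge server and return its result."""
    payload = json.dumps(task).encode("utf-8")
    request = urllib.request.Request(
        EDGE_SERVER,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.load(response)

if __name__ == "__main__":
    # The device only describes the job; the heavy lifting happens remotely.
    result = offload({"model": "scene-render", "quality": "high", "frame": 42})
    print(result)
```

The point of the pattern is that the local device only describes the work and displays the answer; how much hardware it carries stops mattering much.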
AI is becoming the default assistant, co-creator, and optimiser. Think intelligent coding, image and video generation, predictive interfaces, and deeply personalised workflows. Devices will anticipate what you need before you even ask.
AR and VR are slowly maturing. Instead of "screens," we'll interact with layered realities, whether for work, collaboration, or entertainment. Mixed reality could replace laptops in some scenarios entirely.
5G, soon 6G, Wi-Fi evolution, and mesh networks mean devices everywhere are connected and synchronised. Computing isn't tied to a location; it's fluid, distributed, and persistent.
For specialised tasks (cryptography, simulations, materials science), quantum and neuromorphic computing may redefine what's even possible. For regular users, this will translate to faster, smarter applications without needing to understand the tech behind it.
Future computing isn't just about speed. Power-efficient chips, reusable hardware, and cloud-based optimisation will matter as energy consumption becomes critical.
My own vision - an AI that grows with you.
I just want a mobile device that is connected to AI and learns my habits.
If you've never seen Mrs. Davis then I suggest watching it - it's a great show. It's a bit over the top in terms of how it's portrayed, but it's close to how I would want it.
Timetables, maps, traffic updates, news, sports, whatever you ask for: it learns and grows, knows your habits, and sets alarms. Want to book flights for a holiday? Just say "Book flights to Las Vegas" and you're presented with options; it can book on your behalf, present the tickets at the gates, etc.
Only downside: what if it breaks? Then I'd say you'd have a smaller backup version that you switch on in the event of loss or breakage, and it works the same way; once it's on, it places the order for the parent device and arranges delivery to the location where you will be.
I think AI will drive the next generation - downloadable apps will be a thing of the past, or at least the downloadable part will be; the APIs and backends would still function.
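As a rough illustration of "no app, just AI plus backend APIs", here's a minimal sketch of the flight-booking request from earlier. The airline URL, its routes, the response fields, and the toy intent parsing are all invented for this example; a real assistant would do far more, this is just the shape of the idea.

```python
# Hypothetical sketch of "no downloadable app": the assistant turns a spoken
# request straight into calls against an airline's existing backend API.
# Every endpoint, field, and the crude intent parser here is made up.
import json
import urllib.parse
import urllib.request

FLIGHTS_API = "https://api.example-airline.com/v1"  # hypothetical backend

def search_flights(destination: str) -> list[dict]:
    """Ask the backend for flight options; no dedicated app involved."""
    url = f"{FLIGHTS_API}/flights?to={urllib.parse.quote(destination)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["options"]

def book(option_id: str) -> dict:
    """Book the chosen option on the user's behalf and return the ticket."""
    req = urllib.request.Request(
        f"{FLIGHTS_API}/bookings",
        data=json.dumps({"option": option_id}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def handle_request(utterance: str) -> None:
    # Crude stand-in for real intent parsing ("Book flights to Las Vegas").
    if utterance.lower().startswith("book flights to "):
        destination = utterance[len("book flights to "):]
        options = search_flights(destination)
        chosen = options[0]            # a real assistant would let you pick
        ticket = book(chosen["id"])
        print("Ticket ready to present at the gate:", ticket)

handle_request("Book flights to Las Vegas")
```

The apps disappear, but the booking, payment, and ticketing APIs behind them keep doing the work; the AI just becomes the front end for all of them.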