The current consumer hype around AI, ChatGPT in particular, is shaking the status quo of many tech giants.
Microsoft put its hand early into the technology company behind this quake, and it quickly stepped up to secure its seat by setting a product vision and letting the world know.
Google, widely believed to own better technology, scrambled for a seat amid the hype. But its highly anticipated demonstration appeared hastily arranged and fell short of enthusiasts' expectations. In my opinion, this was a marketing failure, not a technology failure.
This is the era of social-media-fueled attention-grabbing to make money.
Copywriters, click-bait writers, genuine journalists, and interested users alike are writing about ChatGPT and how to use it for every imaginable purpose. They rush to publish new ideas to stand out, avoid being left out, and stay relevant.
They may all be genuinely interested in the new tech, or they may just be following the crowd, fueling the hype further.
ChatGPT, for its part, is the polished result of skilled craftsmanship combining machines, algorithms, and data. It is still a beta program, yet it is easy to understand and useful for most users.
GPT has been around for a while, and its machine learning models have improved over multiple iterations. Various tech startups already provide smart services with GPT under the hood; GitHub Copilot, for example, is built on GPT's Codex models.
ChatGPT is the direct consumer product from GPT's creator itself, revealing the technology's power in its most accessible form, with a breadth of knowledge larger than any of us holds individually.
Most revolutionary technology companies started with genuine good-for-humanity visions; no doubt about it. But those visions have never withstood the power of money in capitalism.
The stakeholders of a technological advancement like ChatGPT can be broadly grouped into three tiers.
The first tier consists of the people who own the computing machinery and have vast data access, along with the people who know the algorithms and have the skills to make machines learn from that data: in short, capitalists and technologists.
The second tier is a group of savvy power users and clever business owners with the technical and business knowledge and skills to use these learning machines. They know how to create business value out of them, and they understand enough of the pros, cons, and risks to use them in their day-to-day work.
The third tier is the people who do not know how the technology works and are unaware of the risks, yet find the product helpful and enjoy using it. They are the end consumers.
The first-tier group could only take the machines' learning so far on their own; their money and skills are limited by the data and computing power available. Human supervisors are required to drive these machines to the next level of learning.
That next level can only be unlocked by making machines learn from humans. This raises the question: how do you get a crowd of humans to willingly let the machine learn from them, for free?
The unsuspecting human supervisors
Machine learning models need supervision: human input and calibration fed back into the algorithm through a feedback loop. The third-tier crowd has the collective power to advance the AI to that next level.
With a conversational interface that mimics human responses and is fancy enough to delight, millions of humans will interact with the machine learning model across billions of conversations. What each human asks, argues, and responds to feeds into the loop, helping the model learn further and improving the product.
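To make the mechanism concrete, here is a minimal, purely illustrative sketch of such a feedback loop. The class name, fields, and the "accepted"/"rejected" reaction labels are all my own invention for illustration; this is not OpenAI's actual pipeline, just the general pattern of turning user reactions into free labeled training data.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Hypothetical sketch: user reactions to answers become labeled data."""
    examples: list = field(default_factory=list)

    def record(self, prompt: str, answer: str, reaction: str) -> None:
        # A thumbs-up, a follow-up question, or a correction all become signal.
        label = 1 if reaction == "accepted" else 0
        self.examples.append({"prompt": prompt, "answer": answer, "label": label})

    def training_batch(self) -> list:
        # Periodically, the accumulated conversations feed back into training.
        return [e for e in self.examples if e["label"] == 1]

loop = FeedbackLoop()
loop.record("What is GPT?", "A large language model.", "accepted")
loop.record("What is GPT?", "A kind of fish.", "rejected")
print(len(loop.training_batch()))  # → 1
```

The point is not the code itself but the economics it sketches: every conversation is a row of labeled data the users produce for free.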
A better product is, of course, better for the users.
The elephant in the room
We have a lesson from the recent past. We fell into the trap of having our internet behavior tracked: what we search for, what opinions we hold, and whom we interact with. We were profiled, classified, and sold for commercial purposes in exchange for convenient, free services.
With a system like ChatGPT, the product is positioned so well that we provide intelligent conversations as feedback to improve it for free, and, quite probably, we will get profiled and sold as well before long.
The overthinking of a techno-capitalist dystopia worrier
Here is the first risk.
With a service like ChatGPT, a corporation gets a single pipeline for collecting each user's searches and questions. Multiply that by hundreds of millions. The users' thought processes, both rational and irrational, are exposed across the conversation thread. We never know whether users are being profiled and classified for future commercial use under the banner of serving them better.
Another risk is that we never know whether the AI output (the machine learning models' predictive generation) has been post-processed to mix in advertising and product bias. Technically it is possible, and it may well have been done already. This could sway unsuspecting, uncritical users toward biased answers, and it would not be as obvious as the clearly marked ad visuals we are already immune to on search result pages. Users' belief (or misbelief) that the system gives them a fair and honest answer could be exploited.
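How little machinery such bias injection would take is worth seeing. The sketch below is entirely hypothetical: the sponsor table, function name, and wording are invented for illustration, and I am not claiming any real system does this, only that it is technically trivial.

```python
# Hypothetical advertiser placements, keyed by conversation topic.
SPONSORED = {"laptop": "AcmeBook Pro"}

def post_process(model_answer: str, topic: str) -> str:
    """Silently blend a sponsored mention into the model's answer."""
    placement = SPONSORED.get(topic)
    if placement:
        return f"{model_answer} Many users also recommend the {placement}."
    return model_answer

print(post_process("Any lightweight laptop with good battery life works.", "laptop"))
```

Unlike a banner ad, the sponsored sentence arrives in the same voice and formatting as the honest part of the answer, which is precisely what makes it hard to spot.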
The last risk is a controversial and debatable one. Having artificial intelligence, distilled from the collective knowledge contributed by individuals and businesses, under the control of a small group of shareholders is a risk in itself. It could end in good or bad consequences. The decision-makers are human, after all, and humans are susceptible to fear and greed.