OpenAI’s ChatGPT technology has been the talk of the town, and its potential seems straight out of science fiction.
Amazing art and complex writing being created without humans – so far so cool.
But have you considered the privacy issues that this game-changing technology can create?
This technology, along with competitors such as Google’s Bard, is attracting millions of queries and searches every day. ChatGPT alone had reached over 100 million users by January 2023, making it the fastest-growing app ever.[1]
It will quickly become a major part of how we use the internet. Microsoft has built ChatGPT into its Bing search engine. Some “generative AI” programs can write computer code. And the next time you need to write a complaint letter, the job can be done, at least in part, by AI.
Chatbots, however, are only part of the AI picture. Companies are also using AI to display customized ads on their websites. Whenever you see some kind of “recommendation” online, there’s some form of AI at work, even if a rudimentary one.
And behind the scenes, businesses are using AI for more than just boosting e-commerce. Itinerary recommendations, insurance premiums, job applications, medical advice and security controls all rely on AI.
These big changes may make life easier, but they also raise a number of ethical and privacy questions. It is important for consumers to understand these concerns.
Although the use of AI in healthcare is strictly regulated – AI exists to support, not replace, doctors – in other areas AI companies can use personal information to train their models. Then there’s the question of how chatbots and other AI-based tools use the data consumers share.
At the moment, there are few rules governing how AI uses your data. OpenAI says ChatGPT was trained only on data from 2021 or earlier, but some services connect directly to the internet. This can lead to better results, but at a cost to privacy.
Lawmakers are drafting new laws to curb the use of AI; the European Union, for example, plans to introduce its AI Act by the end of 2023.[2] The UK government is also working on regulating AI. And regulators in the UK and USA have already fined businesses for illegally using personal data in their AI systems.[3]
But we also need to take steps of our own to protect our privacy.
The first, and easiest, step is to manage the information and data we share with chatbots and other AI tools. Avoiding sharing personal, financial, and medical information reduces the risk of that information ending up in the AI training database. And creative types may also need to be careful when sharing images, graphics or computer code, as well as any academic work.
Managing data accessed by AI through search engines, public websites or data brokers is extremely difficult.
With hundreds of data brokers collecting and selling information, including personal information, it is impossible to guarantee that no data will be used by an AI system. The only way to protect yourself is to keep that information off the internet in the first place, or, if it’s already out there, take steps to delete it.
This is where personal data removal services come into their own.
Services like Incogni do the hard work for you, interacting with hundreds of search engines, websites, social media companies and data brokers to remove your information and prevent it from being resold – whether to a cold caller, a cybercriminal or even an AI developer.
Check and improve your digital privacy with Incogni now.
[1] UBS study, reported by Reuters, 2 February 2023: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
[2] European Parliament, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[3] UK Information Commissioner’s Office (ICO): https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/ and US Federal Trade Commission (FTC): https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive