In today’s world, continued awareness of cybersecurity – and taking action to ensure the safety of your systems and data – is crucial. This is especially so given the rapid emergence of new technologies such as Generative Artificial Intelligence (GAI). ChatGPT and Google Bard are examples of the new breed of GAI-driven applications.
As with any technology platform, GAI tools offer a wealth of exciting possibilities but also come with potential security risks. Now, more than ever, is a good time to refresh your approach to online security and I would like to highlight three particular risks in relation to GAI.
- Scams – GAI extends the possibilities for scams, with criminals increasingly using this new technology for fraudulent purposes. A scam video appearing to show billionaire Elon Musk promoting an investment opportunity is a recent example of the misuse of GAI-generated content. You can view this ad – clearly labelled as a scam – on YouTube.
- Misinformation – GAI tools can produce incorrect or misleading outputs. This was illustrated in a celebrity court case this year, where AI-generated misinformation led to calls for a retrial.
- Data leakage – GAI tools could offer your inputted data to others who query similar things. Earlier this year, for instance, it was discovered that a Microsoft AI employee had accidentally leaked 38TB of sensitive internal data.
To help you securely navigate this emerging space, here are five tips for using GAI-driven tools and applications:
Be aware of fake AI apps and other extensions
These may give you a false sense of security while exposing your device to malware and other threats.
Tip #1:
Before you use a GAI tool, take the time to check and verify its authenticity.
Always check and validate AI-generated content
Some GAI apps may be prone to generating plausible-sounding but incorrect or misleading outputs.
Tip #2:
Review and verify AI-generated results, especially statistics and facts.
Be cautious of data you feed into GAI tools
Generative AI tools are ‘trained’ on inputted data, so what you put in may be stored and used for other purposes.
Tip #3:
Sharing data in open AI tools could result in a breach. Only use solutions approved by your firm.
Review settings to prevent data leakage
GAI models could be trained to learn from prompts and data could be offered to others who query similar things.
Tip #4:
Restrict sharing options when using Generative AI – this prevents the tool from using the data you enter to ‘train’ itself.
GAI can be used for malicious purposes
Cybercriminals are exploiting GAI tools to code malicious software, break encryption, create deepfakes*, clone voices, and produce convincing phishing emails.
Tip #5:
Verify external or unusual requests and think carefully before opening attachments or clicking links in emails.
I have focused here on the risks of Generative AI, but it’s important to remember it also offers lots of potential for firms. There are tools available which utilise GAI to help advisers save time and money, for example, so it has the potential to drive efficiencies within a business. That’s why keeping up to date with this evolving technology is vital.
*Any video or image in which faces have been either swapped or digitally altered with the help of artificial intelligence, usually for the purposes of fraud or misinformation.
Note: All images for these tips were created using Firefly, Adobe's generative AI image tool.