What does AI Legislation Look Like?

AI legislation

We have all seen the rise of AI-based technologies and the fear-mongering around the existential risk to humanity and how AI will take over all jobs.

Before I get into it – this article was NOT written by ChatGPT, Google Bard or any other LLM. These are my own words – but of course that’s probably what I would said if AI did write this blog. 

Okay, back to the point, let’s put some cold water on that fire that AI is going to take your job.

Yes, AI will hugely revolutionise how we all work, how our employers interact and what they expect from us. But at the moment, AI cannot replace a human. Most of the improvement in AI in the last 12 months has come from Large Language Models, better known as LLMs, but their design models are built on learning from existing material fed into it from existing sources.

Human written and developed sources, from this data, can extrapolate, develop, modify and present a response to a natural language question from the data they have been taught, which needs to be closely monitored and checked to ensure integrity and verify its truthfulness.

One Big Flaw

This is where current AI draws its major flaw: it cannot distinguish fact from fiction – it only knows its data. This is why when you feed data back into AI from AI, the model degrades, and very quickly, you lose all sense of fact, and the results become conjecture or in some cases completely false. This is why, in the short term, AI cannot replace the human operating it because we all need to take the presented data and give it the once over and look at the text and say, “Does that really make sense, and is that really the truth or is it some biased opinion or even worse incorrect?”.

The same logic applies when you look at replacing human roles with AI; someone will need to be the gatekeeper to the operation, and someone will need to validate the output.

Some tasks are at risk

AI may well replace the roles that we currently spend hours on. For example, writing this article has consumed a good chunk of my time and is done by reading sources, digesting news and creating a written argument that pulls this together. In the future, AI will be the one writing this article, pulling from the latest data (This is important as current AI is using legacy data) to generate an article. This will then be checked over by myself and then a copywriter and published for you to consume. This won’t make the copywriter redundant, or me, but will change the expectations of our employer as to the expected deliverables of our roles. This is the key behind the headline that AI can’t replace humans in its current form. In future forms, it may well be able to alter many people’s working lives, but humans are still going to be a key part of the puzzle.

So how is it going to be regulated? 

To come onto the second point around control and legislation, we are seeing a continued wave of posturing by Governments and Big Tech about AI and how it needs to be controlled. Some of this is playing to the crowd. With the general public so concerned about AI and how it will affect them, legislators are looking to make sure they have a stance on it and try to control the headline.

That piece aside, there is a very important decision to be made by Governments and the global community about AI: Who sets the boundaries?

Is it going to be Big Tech like it was with the Social Media Revolution, where the world changed, and Governments spent ten years getting up to speed on the technology and how to control and police it? Or will the Governments of the world take control? Will they try to limit the development of AI, stunting its potential with overly controlling legislation which prevents Big Tech and start-ups from developing with AI and pushing the boundaries of what we currently know AI is capable of?

ChatGPT for business

What do I think?

Now, I see this as a very hard line to balance with some European countries’ swing for all-out AI bans before we have even got AI into our everyday workflows. It seems to be exactly what I mentioned before, stifling the development of those countries and the businesses that operate within them. That said, on the flip side, with no control and protection, we could see a world very quickly where AI not developed by us is being used to operate and control CCTV, Traffic light systems, and self-driving cars. This presents a very real risk to the citizens of a given country should the ability to control the AI fall into the wrong hands.

I believe that most regulation will come down to a risk vs reward model, where mainstream AI development is not prevented, but the integration of the technology too heavily into the core infrastructure or day-to-day safety of a country’s citizens will be restricted to prevent the risks I mention above. I don’t believe there is any silver bullet, and with any developing technology, you need to adapt and adjust to it. Businesses that are leveraging the technology need to be sensitive to the risks they present should their tool be leveraged by threat actors or hostile nations.

You shouldn’t fear AI in your business; you should also be aware that not all AI headlines are true to the nature of the risk. AI is amazing if used correctly, and if you ensure you understand the technology and put proper protection in place, it is a hugely powerful tool; when done incorrectly, you can easily put your business or your data at risk.

If you want to talk to one of our experts about how we can help you with your security and understanding of LLMs, then please call 01235 433900, or you can email enquires@planet-it.net, or if you would like to speak to me directly, you can reach out to me via DM or at james.dell@planet-it.net.

Can’t wait to integrate ChatGPT into your business processes? …actually, here’s exactly why you should wait!

ChatGPT for business

You can’t escape it. It’s all over the news and social media about this sudden wave of improvements in LLM (Large Language Models) or as most people know them at the moment Chat-GPT! 

Every large tech firm is rushing to integrate these technologies into their products with Microsoft launching co-pilot and Bing with Chat-GPT integration. Google is launching AI lead improvements to Workspace and Facebook accidentally leaked the source code to their LLM. 🤦‍♂️

With all of this going on you would expect that these products are at least secure and pose no risk to the users, businesses or the general public. And while I am wholly in favour of improvement to AI and ML, we must consider the risks these LLM pose as they begin to become part of everyday life. 

What are you talking about?

I should start by covering what an LLM is. Well in the words of Nvidia “A large language model, or LLM, is a deep learning algorithm that can recognise, summarise, translate, predict and generate text and other content based on knowledge gained from massive datasets.” To most of us what this means is that a system can take input in human language, not machine code or programming language and can then complete these instructions. Now, this can be as simple as how do you bake a cake. Or you can ask it to write an application that will convert files to pdf and upload them to an FTP server based on the IP address x.x.x.x and write an output file for me to show completion, in C++. The LLM will then go away, compute the question against the information it has been “taught” and will then come back with an answer.

chatgpt plus

 There are a few things we should all be aware of with LLMs as they stand today, these limitations are present but not always obvious. 

  • LLMs are driven by the dataset they have and may have complete blind spots to events if they occur post the data set provided, i.e Chat GPT (GPT-3) is based on a data set from 2021. So if you ask it about the F1 teams for 2023, it will either throw an error or will simply give you information it “generates” from the information it has been fed.
  • LLMs can therefore “hallucinate” facts and give you a completely incorrect answer if it doesn’t know the facts or if the algorithm works itself into a situation where it believes it has the right information.
  • LLMs are power-hungry. They need huge amounts of computing power and data to train and operate the systems.
  • LLMs can be very biased and can often be tricked into providing answers by using leading questions making them unreliable.
  • The largest risk is that they can be coxed into creating toxic content and are prone to injections actions.

Therefore the biggest question remains what is the risk of introducing an LLM into your business workflow? 

With the way that LLMs work they learn from data sets. Therefore, the potential risk is that your business data inside applications like Outlook, Word, Teams or Google Workspace is being used to help develop the LLM and you don’t have direct control over where the data goes. Now, this is bound to be addressed over time but these companies will 100% need access to your data to move these models forward so limiting its scope will have an impact on how they develop and grow. Microsoft and Google will want to get as much data as possible. 

As such you need to be careful to read the Terms of Use and Privacy Policy of any LLM you use. 

Other Risks

This one is scary, and it increases as more organisations introduce LLMs into the core workflow, is that queries stored online may be hacked, leaked, stolen or more likely accidentally made publicly accessible. Because of this, there is a huge risk of exposing potentially user-identifiable information or business-related information. 

We should be aware of the misuses risk that also comes from LLM with the chance they will be used to generate more convincing phishing emails, or even teach attackers better ways to convince users to enter into risky behaviour. 

openai

The final risk that we should be aware of is that the operator of the LLM is later acquired by a company that may be a direct rival to yours, or by an organisation with a different approach to privacy than when you signed up for the platform and therefore puts your business at risk. 

As such the NCSC recommends

  • not to include sensitive information in queries to public LLMs
  • not to submit queries to public LLMs that would lead to issues were they made public

At this point, Planet IT’s recommendation is not to integrate the new features from Microsoft and Google into your business workflow. Certainly not until proper security and data controls have been implemented by these companies and the risk of your business data being used as sample material to teach the LLMs is fully understood. These are emerging technologies, and as we continue to see change at Planet IT we are monitoring everything very carefully to understand how it will affect the security and data compliance of your business. 

More information from the NCSC can be found here : https://www.ncsc.gov.uk/blog-post/chatgpt-and-large-language-models-whats-the-risk

If you want to talk to one of our experts about how we can help you with your security and understanding of LLM then please call 01235 433900 or you can email enquires@planet-it.net or if you would like to speak to me directly you can reach out to me via DM or at james.dell@planet-it.net.

IMPORTANT!!

This article was NOT written by ChatGPT. It was written by this ChapJPD (James Peter Dell)

Looking for a technology partner?
Let’s talk

  • This field is for validation purposes and should be left unchanged.