2024-01-23 12:34:17

The (theoretical) risks of open sourcing (imaginary) Government LLMs
https://shkspr.mobi/blog/2024/01/the-theoretical-risks-of-open-sourcing-imaginary-government-llms/

Last week I attended an unofficial discussion group about the future of AI in Government. As well as the crypto-bores who have suddenly pivoted their "expertise" into AI, there were lots of thoughtful suggestions about what AI could do well at a state level.

Some of it is trivial - spell check is AI. Some of it is a dystopian hellscape of racist algorithms being confidently incorrect. The reality is likely to be somewhat prosaic.

Although I'm no longer a civil servant, I still enjoy going to these events and saying "But what about open source, eh?" - then I stroke my beard in a wise-looking fashion and help facilitate the conversation.

For many years, my role in Cabinet Office and DHSC was to shout the words "OPEN SOURCE" at anyone who would listen. Then patiently demolish their arguments when they refused to release something on GitHub. But I find myself somewhat troubled when it comes to AI models.

Let's take a theoretical example. Suppose the Government trains an AI to assess appeals to, say, benefits sanctions. An AI is fed the text of all the written appeals and told which ones are successful and which ones aren't. It can now read a new appeal and decide whether it is successful or not. Now let's open source it.

For the hard of thinking - this is not something that exists. It is not official policy. It was not proposed as a solution. I am using it as a made-up example.
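To make the thought experiment concrete, here's a minimal sketch of what such a classifier might look like, in Python with scikit-learn. Every name, appeal, and data point in it is invented - this is only the simplest plausible shape of the idea, not anyone's actual system.

```python
# A minimal sketch of the hypothetical appeals classifier.
# All data here is invented; nothing like this exists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example data: appeal texts and whether each appeal succeeded.
appeals = [
    "My payment was stopped while I was in hospital.",
    "The letter arrived after the appointment date had passed.",
    "I did not attend the appointment because I forgot.",
]
succeeded = [1, 1, 0]  # 1 = appeal upheld, 0 = appeal rejected

# TF-IDF features into logistic regression: the simplest plausible
# version of "fed the text ... and told which ones are successful".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(appeals, succeeded)

# "Read a new appeal and decide whether it is successful or not."
new_appeal = "My payment was stopped while I was caring for a relative."
print(model.predict_proba([new_appeal])[0][1])  # estimated P(success)
```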

What does it mean to open source an AI? Generally speaking, it means releasing some or all of the following.

The training data.
The weights assigned to the training data.
The final model.
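Continuing the sketch above, those three artifacts are distinct files which can be released - or withheld - independently of one another. The filenames and serialisation choices here are illustrative only:

```python
import json

import joblib  # a common way to serialise scikit-learn models

# 1. The training data: the raw appeals and their outcomes.
with open("appeals_training_data.json", "w") as f:
    json.dump({"texts": appeals, "labels": succeeded}, f)

# 2. How the data is trained: publishing the script above (the
#    pipeline definition and the fit call) covers this artifact.

# 3. The final model: the fitted pipeline, learned weights included.
joblib.dump(model, "appeals_model.joblib")
```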

I think it is fairly obvious that releasing the training data of this hypothetical example is a bad idea. Appellants have not consented to having their correspondence published. It may contain deeply personal and private information. Releasing this data is not ethical.

Releasing how the data is trained is probably fine. It would allow observers to see what biases the model has encoded in it. Other departments could reuse the code to train their own AI. So I (cautiously) support the opening of that code.

But training weights without the associated data is kind of useless. Without the data, you're unable to understand what's going on behind the scenes.

Lastly, the complete model. Again, I find this problematic. There are two main risks. The first is that someone can repeatedly test the model to find weaknesses. I don't believe in "security through obscurity" - but allowing someone to play "Groundhog Day" with a model is risky. It could allow someone to hone their answers to guarantee that their appeal would be successful. Or, more worryingly, it could find a lexical exploit which can hypnotise the AI into producing unwanted results.
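Continuing the sketch, here's roughly what that "Groundhog Day" probing looks like. With an offline copy of the model there are no rate limits and no audit trail, so an attacker can score as many rewrites as they like and submit the winner:

```python
# Candidate rewrites of the same underlying appeal (invented).
rewrites = [
    "I missed the appointment because I forgot.",
    "I missed the appointment because the letter arrived late.",
    "I missed the appointment because I was in hospital.",
]

# With a local copy of the model, every rewrite can be scored offline.
scores = {text: model.predict_proba([text])[0][1] for text in rewrites}
best = max(scores, key=scores.get)
print(f"submit this one: {best!r} (P(success) = {scores[best]:.2f})")
```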

Even if that weren't a concern, it appears some AI models can be coerced into regurgitating their training data - as discovered by the New York Times:

The complaint cited examples of OpenAI’s GPT-4 spitting out large portions of news articles from the Times ... It also cited outputs from Bing Chat that it said included verbatim excerpts from Times articles.
NY Times copyright suit wants OpenAI to delete all GPT instances

Even if a Government department didn't release its training data - those data are still embedded in the model, and an attacker may be able to reconstruct them. Any sensitive or personal information in the training set could therefore leak.
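The classifier sketched above is far simpler than an LLM, but the intuition can be shown with a crude membership-inference style check: a model that is unusually confident about a specific, obscure piece of text may well have seen it during training. (Real extraction attacks, like those alleged in the NYT complaint, are considerably more sophisticated - this is only the flavour of the risk.)

```python
def membership_signal(model, text):
    """Peak class probability for `text`. Scores near 1.0 on specific,
    obscure wording are a hint the text was in the training set."""
    return max(model.predict_proba([text])[0])

print(membership_signal(model, appeals[0]))                        # seen in training
print(membership_signal(model, "A brand new appeal about rent."))  # never seen
```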

Once again, to be crystal clear, the system I am describing doesn't exist. No one has commissioned it. This is a thought experiment by people who do not work in Government.

So where does that leave us?

I am 100% a staunch advocate for open source. Public Money means Public Code. Make things open: it makes things better.

But...

It seems clear to me that releasing training data is probably not possible - unless the AI is trained on data which is entirely safe / legal to make public.

Without the training data, the training code is of limited use. It should probably be opened, but would be hard to assess.

The final model can only be safely released if the training data is safe to release.

What next?

I'll admit, this troubles me.

I want to live in a world where the data and algorithms which rule the world are transparent to us. There will be plenty of AI systems which can and should be completely open - nose-to-tail. But there will be algorithms trained on sensitive data - and I can't see any safe, legal, or moral way of opening them.

Again, I want to stress that this particular example is a figment of my imagination. But at some point this will have to be reckoned with.

I'm glad this isn't my problem any more!

#AI #government #OpenSource