2 Comments
I agree that it's hard to make a GPT-based model forget what it has already learned. But I think it's easy to direct a GPT-based model not to learn from new information. In other words, the GPT-based model can be put into sandbox mode in which some new input isn't used to train the model and is promptly forgotten.
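The "sandbox mode" described here can be sketched as an inference-only wrapper around a frozen model: the prompt is used once to produce a response and is never appended to any training buffer or log. `FrozenModel` and its `generate` method below are hypothetical stand-ins for illustration, not a real API.

```python
class FrozenModel:
    """Stand-in for a model whose weights are never updated."""

    def generate(self, prompt: str) -> str:
        # Purely illustrative: a real model would run inference here.
        return f"response to: {prompt}"


class SandboxSession:
    """Inference-only session: inputs are used once, then forgotten."""

    def __init__(self, model: FrozenModel):
        # No optimizer, no gradient updates, no logging of user input.
        self._model = model

    def ask(self, prompt: str) -> str:
        response = self._model.generate(prompt)
        # The prompt exists only as a local variable; nothing is stored
        # for later training, so it is "promptly forgotten" on return.
        return response


session = SandboxSession(FrozenModel())
print(session.ask("What is GDPR?"))
```

The point of the sketch is architectural: not learning from new input is a design choice at the serving layer, whereas unlearning already-trained data requires changing the weights themselves.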

Contrary to your assumption (or what I understood it to be), making a GPT-based model “forget” something it has already “learned” isn’t easy at the moment, and the path toward making it easy isn’t clear. Depending on the level of accuracy you’re looking for (and at least sometimes the law will want 100%), it can require retraining the model from scratch, which is prohibitively expensive...

That’s one of the reasons why the early judgment in Italy regarding ChatGPT and GDPR must be regarded as important, IMO.
