Ric Raftis Consulting

Navigating the Legal Labyrinth: Who Owns AI-Generated Content?

This morning, I listened to a podcast discussing the issue of copyright in relation to generative AI. The point was made that anything produced by generative AI cannot be copyrighted and is therefore effectively owned by no one. Interestingly, the podcast also mentioned lawyers and legal firms arguing that organisations should be responsible for any content generated by AI within their organisation.

As someone without legal qualifications, I question how an organisation can be held responsible for something it does not own. It seems the matter must come down to the user rather than copyright. So despite the generated content allegedly being at fault, it is the person who disseminated that information who will be held responsible. By way of analogy, this is like me giving someone a lithium battery and charger as a gift. If it sets fire to their house, am I held responsible for that? Surely not. Surely the responsibility lies with the manufacturer of the products.

If we follow the same line of reasoning, then the company whose model generated the content, i.e. OpenAI or Anthropic, produced the end product, not the user. So would they not be held responsible?

Personally, I dislike with considerable intensity these arguments where, as a society, we are always looking for someone to blame; however, blame has become an entire industry and economy in itself. There are always legal “experts” who can be engaged on both sides of an argument, which makes the entire system a bit of a farce. If anyone can provide clarity on this matter, it would be appreciated.

I understand the relationship between an organisation and its responsibility for work done by employees. For example, if an employee repairs a car or machinery, the organisation may be held liable for damages if the work is not completed properly. There is a direct relationship between the organisation, employee’s contracted work, and the responsibility towards the customer in return for payment.

Let us consider a scenario where an organisation's employees use generative AI without the organisation's knowledge. In this case, the content produced is not technically owned by the organisation, since it cannot be copyrighted or owned by anyone. So who is responsible? Is the employee responsible for being deceitful and passing off the work of generative AI as their own? If the organisation lacks a generative AI policy, is it considered negligent for not providing adequate training for its employees?

And while we are on the matter of legality, it is my understanding that if I put something on YouTube, Facebook, or here for that matter, I own the copyright and merely grant the platform permission to use it.

The drafting of this entire article raised so many questions for me that I decided to consult someone who knows far more than I do. Here is the prompt that emerged from my thought process:

Me: I am continuing to expand on this article and I have a question. I understand you are not a lawyer, but wish to propose a scenario.

If I create a video for YouTube or an article for Medium, my understanding is that I own the copyright. If an AI company scrapes that content from the web and uses it to train their AI model, is that not questionable? If we take the case of media outlets striking licence agreements with AI companies for the use of their content, it is interesting that they are not passing on a portion of this revenue to the journalists who wrote the content, despite the fact that they own it. The additional revenue being generated was unforeseen and therefore not included in the journalists' salary packages.

Formatted GPT-4 Preview Response

Agent (GPT-4 Preview): You’ve touched on a complex and increasingly relevant issue within copyright law and AI ethics. Here’s a breakdown of the scenario and some considerations:

Ownership and Copyright

When you create original content like a video for YouTube or an article for Medium, you generally retain copyright to that content unless you specifically relinquish those rights (e.g., through an agreement with a platform or publisher). This means you control how that content is used and distributed.

Use of Content by AI Companies

If an AI company uses your content to train their models, several factors come into play:

  • Fair Use: In some jurisdictions, using snippets of content for training purposes might be considered “fair use,” especially if the content is transformed in a way that doesn’t compete with the original or doesn’t diminish its value. However, this is a grey area and often subject to legal interpretation.
  • Consent and Licensing: If the content is used beyond fair use, ideally, the AI company should obtain consent or arrange licensing agreements with the copyright holders. This is what media outlets do when they strike deals with AI companies.

Compensation for Journalists

The issue of compensating journalists when their work contributes to revenue through AI training is thorny:

  • Contractual Terms: Typically, journalists are paid for their work according to the terms of their employment or freelance contracts. These contracts might not have provisions for future uses or revenues unless specifically stated.
  • Ethical Considerations: There’s a growing debate on whether companies should share profits derived from AI that uses existing content with the original creators. This touches on broader ethical considerations about the fair distribution of benefits from AI.

Moving Forward

Given the rapid evolution of AI technologies, there might be a need for new legal frameworks or revisions to existing copyright laws to address these issues adequately. Stakeholders, including content creators, AI companies, and legal experts, need to engage in dialogue to define fair practices and ensure that all parties are justly compensated and acknowledged.

This scenario indeed highlights the need for clearer regulations and ethical standards in the AI industry, reflecting the evolving nature of content creation and usage in the digital age.

End of Agent’s response.

Comment on Agent’s Response

The agent raises some extremely valid concerns. This is what I personally enjoy most about AI: the ability to interact and ideate, particularly in areas of complexity such as this one. The almost independent nature of the response causes you to pause, consider, and then ask additional questions for clarity. I will continue this conversation with the agent, but the above is sufficient for this article.


This emerging area seems to present a minefield of challenges and potential scare tactics as people raise concerns along the way. At times it can be difficult to separate genuine concerns and questions from self-interest. I am watching this topic keenly, and an open mind will be essential to consider all sides of the debate. In addition, because so much of it is so new, critical thinking will be crucial to gain insight into some of the as yet unasked questions in the space. What are your thoughts on the matter? How do you see some of the questions raised in this article? Let me know in the comments below.


