It just got personal…

A friend once told me that, in times of conflict, a bomb hitting your mother’s home changes your perspective. You may not endorse the war, but human nature pushes people to pick a side once the conflict hits close to home or causes destruction. That tendency is common; those who manage to stay objective despite the personal stakes are the exception, not the rule.

Phillip J. Clayton
Jun 10, 2024

Conversations between creatives and major corporations frequently lack context and objectivity, and they tend to be defensive, fearful, and reactive. I recently became aware of Adobe’s Terms of Service, and the access it grants Adobe to users’ work, through discussions with friends, both in person and online. It is worth noting that these terms have been in place for quite some time.

Typically, people do not pay much attention to such matters until they are directly affected, particularly in the professional realm, where financial considerations often take precedence. When one’s livelihood is at stake, it is only natural to react. If creators were compensated each time an AI learned from their content, and client relationships were never put at risk, it is unlikely that many would object.

Our responses are often shaped by the unfamiliar, and to my knowledge there have been no reported cases of creative professionals taking legal action against Adobe over its AI technology. That suggests no one’s work has been compromised or sold to external parties. However, if such instances exist and you are aware of them, I would appreciate it if you could share them with me.

Fear remains a pervasive reality for creative professionals and their work. Personally, I prefer to examine the various perspectives thoroughly and understand the situation before evaluating its impact and considering potential compromises. To begin, I will outline the findings of my research.

A breakdown of Adobe’s machine learning (ML) practices, and how they relate to its Terms of Service and its ability to view user work:

Adobe’s Terms of Service:

  • Adobe’s General Terms of Use (https://www.adobe.com/legal/terms.html) grant Adobe the right to analyze user content using techniques like machine learning.
  • This analysis serves various purposes, including improving services, detecting fraud, and responding to support requests.

What can Adobe see?

  • There have been concerns that Adobe can access all user content.
  • However, Adobe has clarified that it primarily analyzes data about how users interact with the software, not the specific content itself.
  • This might include things like which tools users rely on most, or general workflow patterns (a hypothetical sketch of this distinction follows this list).
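
To make that distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the field names are invented for this article and do not reflect Adobe’s actual telemetry format or any real Adobe API.

```python
# Purely hypothetical illustration: these field names are invented for
# this article and do NOT reflect Adobe's actual telemetry or APIs.

# Interaction telemetry: describes HOW the software was used.
usage_event = {
    "event": "tool_used",
    "tool": "clone_stamp",
    "app_version": "25.0",
    "session_minutes": 42,
}

# Creative content: the work itself, which is the sensitive part.
creative_content = {
    "layers": ["client_logo", "campaign_headline"],
    "pixels": "...",  # the actual artwork
}

# A product-improvement pipeline would aggregate events like usage_event,
# not ingest files like creative_content.
print(f"Most-used tool this session: {usage_event['tool']}")
```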

The controversy:

  • There was user backlash due to concerns about privacy and ownership of creative work.
  • Some users worried their content might be used to train AI models without their consent.

Adobe’s response:

  • Adobe has updated its policies and communication to be more transparent about its ML practices.
  • It emphasizes that it does not claim ownership of user content (https://www.adobe.com/legal/terms.html).
  • There are also specific restrictions on using user content to train generative AI features (https://www.adobe.com/legal/licenses-terms/adobe-gen-ai-user-guidelines.html).

What this means for you:

  • While Adobe uses ML to analyze user data, the focus seems to be on improving the software and user experience, not viewing specific creative content.
  • It’s always wise to review the terms of service before using any software.
  • If you’re still uncomfortable, you can explore adjusting your privacy settings within Adobe products (if available).

For further information:

  • The full Adobe General Terms of Use: https://www.adobe.com/legal/terms.html
  • Adobe’s Generative AI User Guidelines: https://www.adobe.com/legal/licenses-terms/adobe-gen-ai-user-guidelines.html

Finding Balance in a Changing Landscape: User Privacy, Machine Learning, and the Future of Creativity

The digital age presents a constant dance between innovation and user privacy. A recent article in Fast Company (“Creatives are right to be fed up with Adobe and every other tech company right now”) highlights creatives’ concerns about work covered by non-disclosure agreements (NDAs) and how Adobe’s machine learning (ML) tools might access it.

This sparks a crucial conversation: how do we balance the need for technological advancement with the protection of sensitive information?

The Power of personal impact

John Maynard Keynes is often credited with saying, “When the facts change, I change my mind — what do you do, sir?” This rings true for user concerns. Issues often become personal when they directly impact our work or lives. For instance, the initial outrage over social media privacy practices ultimately led platforms to offer more customization, a response driven by user demand.

This pattern holds true across industries. We readily accept curated content on Netflix, personalized shopping experiences, and targeted advertising — all fueled by data collection. However, the line blurs when the data source becomes our own creative work protected by NDAs.

Privacy vs. progress: A false dichotomy?

The desire for security and a lack of transparency can lead to fear and reactive stances. While privacy and copyright are vital, is complete resistance to new technology the answer?

Finding common ground

  • Transparency: Companies like Adobe should be transparent about ML practices and data usage.
  • Legal recourse: Legal options exist for users who believe companies have violated their agreements.
  • Open dialogue: A collaborative approach is needed. Can creative professionals and technology companies work together to establish ethical guidelines for AI development?

The cost of convenience

Technology thrives on data. Privacy comes at a cost: potentially less personalization and fewer advanced features. We must decide what level of intrusion is acceptable in exchange for the benefits of progress.

The question isn’t “ease and comfort vs. no change.” It’s about finding a balance. We can embrace innovation while holding tech companies accountable. Can AI tools learn from user work without compromising privacy? Can we create a future where creativity flourishes alongside responsible technological advancement?

The answer lies in open communication, collaboration, and a willingness to adapt as the technological landscape evolves.

I do not profess to be all-knowing; I could be wrong about everything I have said, and perhaps I even missed something… What I advocate is an understanding that most things in this world are about trade-offs, and that most things don’t last long. Be less reactive and more observant, and engage in benevolent conversations.

--

Phillip J. Clayton

I like money but I love my time - Life is about trade-offs: Brand consultant | Strategic advisor | International Brand & Marketing design judge.