What Do You Need To Know About AI, Actually?
If you're trying to understand why some people are mildly to wildly against the use of AI, it can be difficult to sift through all the information about why you should or shouldn't use it.
On the one hand, there are lots of incredible uses of AI in general science and technology, and in things like accessibility, when used properly and ethically.
On the other, AI is simply not what the hype says it is. No matter what the PR spin says, AI is not a tool like a hammer or the printing press. And it is doing humanity very few favors right now.
If you're not familiar with what AI software actually is, or how it works, I encourage you (and anyone who insists it is "just a tool") to learn more about it. Again, it is not a tool.
To start with how AI works: it does not think, and it cannot conceive of concepts. It is software. Generative AI models like ChatGPT work by statistical pattern-matching. They are trained on enormous amounts of text or images, learn which words or pixels tend to follow which, and then predict the most likely next piece of output. There is no understanding behind any of it, just probability.
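To make that concrete, here is a toy sketch of that idea (my own illustration for this post, not any company's actual code): a "next-word predictor" that only counts which word tends to follow which in a tiny sample text. Real models are vastly larger and more sophisticated, but the core principle is the same, and notice that nothing in it understands anything.

```python
import random
from collections import defaultdict

# A tiny sample "training corpus" made up for this illustration.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count which words follow which: pure bookkeeping, no comprehension.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def predict_next(word):
    """Pick a likely next word purely from observed frequencies."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return random.choice(candidates)

# Generate a short "sentence" starting from "the".
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Run it a few times and you get plausible-looking word strings that the program in no sense "meant." Scale that pattern-matching up by billions of parameters and you have the basic mechanism behind generative AI output.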
Now, as a blanket statement: anyone calling AI "inevitable" and pushing people to use it or be left behind is pushing an apathetic mindset, and that apathy is destructive. Apathy is the only sure-fire way to make something inevitable - no one cares enough to do anything about it.
Even the most evil things in the history of humanity have been changed eventually. Nothing is inevitable (sorry, Thanos).
The facts about AI:
You’ve probably heard this, but AI data centers consume wildly high volumes of water, far more than general server room cooling, which already uses a lot. This is affecting both the available drinking water in towns near large data centers and the available water in the world generally, which is increasingly damaging to nature and life on this planet.
It’s a lot to wrap your head around. You can read more in this article about data centers and water consumption.
Heavy reliance on AI is linked to weaker thinking and recall. In an MIT study, participants who wrote essays with ChatGPT showed lower brain engagement and remembered less of their own writing than those who worked unassisted. This hurts people's ability to think and process information for themselves. You might say "that's not me," but every time you set an example as an AI user for tasks like this, you encourage others to do the same. And per the MIT study, it may well be you.
You can read more in this MIT study called "Your Brain On ChatGPT."
In order to train AI, companies run data-labeling operations where human workers have to sift through the material being fed into the models. Many of these workers experience severe psychological damage, and some have been driven to suicide by the volume of extremely graphic content they are forced to view in order to filter it out. These are the companies you're funding when you pay for an AI subscription and use it, and you keep it all going by upholding demand for the service.
You can read this article about the AI chatbot human toll.
If none of that bothers you, or you're still able to justify it, then please know that you're also hurting your own brand when you use AI. Per the SurveyMonkey 2025 State of Marketing report, almost 50% of consumers surveyed said they would refuse to buy from businesses using AI. And almost 80% said humans understand them better than AI does.
You can read the report yourself. It tries to hype AI as much as it can, but the numbers are right there.
And at the bottom of it all, here is why so many artists and other creatives are fighting further development of "artistic" generative AI models (software that spits out images, text, music, graphics, logos, stories, posts, videos, "ideas", etc.): every piece of training data used to build every generative AI model publicly known at the time of this writing was taken from human artists without permission or payment. Every. Single. One. And every time you use a generated anything, for work or "for fun," you're supporting theft from artists.
There are no two ways about that. Every public AI model was trained on work taken from artists - not as progress, as raw theft. And to add insult to injury, the same software companies arguing that data isn't owned and that taking it is vital for progress keep their staff under tight NDAs and file copyrights and trademarks on their own software. But everything created by everyone else doesn't need to be licensed or paid for, according to them. They call it "fair use," twisting a doctrine meant for things like documentary filmmaking and legitimate journalism.
This is a great article on how theft is not fair use.
If that doesn't bother you, then know that you generally can't copyright what AI spits out for you: purely machine-generated material lacks the human authorship copyright requires. There's also a real chance the output isn't original. It may closely resemble the stolen training data it was derived from, and you might embarrassingly discover that what was generated for you is nearly identical to someone else's existing brand or work. Substantial human revision might make ownership arguable, but given the deep legal questions hanging over the training data, it is a risk. By using AI, you're limiting your ability to copyright and own anything it produces for you.
You can read the "Copyright and Artificial Intelligence (Part 2)" report from the US government.
After all that, if you're still trying to explain why it's OK and how you're fine to use it, just know that AI does not actually understand what it is "saying." As a result, it can return completely invented information with no basis in truth or fact - the industry calls these "hallucinations." It makes things up, omits things, and gives incorrect or incomplete answers, and it can't be trusted outright. You could research something entirely through AI and end up totally wrong because you were given false, incomplete, or incorrect information, and that can cost you your job or your reputation.
This article explains how all that works.
Now, nothing is helpful if it is just doom and gloom. And there are a lot of amazing potential uses for AI.
So, for the positives: there is a really wonderful organization, recently launched, called "The Creatives Coalition on AI," which is working to create a safe, sustainable, ethical world around AI use. It is a great group to support, alongside reaching out to your lawmakers and voting on issues that ensure greater protection, regulation, and ethics in the world of AI.
You can learn more about The Creatives Coalition.
If that is too much involvement for you, the least you can do is simply not use AI wherever possible. Turn it off on your computer (Windows currently uses what you do on your computer to train AI features unless you opt out, as does Google/Gemini if you use Gmail).
In general, you don't need a ChatGPT account or any paid AI software. And you don't have to use AI for anything beyond general search - finding things or getting answers linked to their sources so you can read more yourself (AI in search is almost unavoidable right now, since even basic Google searches trigger AI summaries).
While Perplexity.ai is still built on large language models, it tends to hallucinate less than most, and you don't need to log in to use it. When you're done with a Perplexity question, you can close the tab and it doesn't remember the conversation with you specifically, so you're not giving it much.
Beyond that, don't give AI companies your money or your support. Don't create an account with them, so they can't count you in the numbers they present to investors. And don't use AI for "trends" or for anything creative of any kind, not even just for "ideas" - you are outright stealing from artists every single time you do. If you need ideas, turn off AI on Pinterest and make a vision board for inspiration. Don't use AI.
Share this blog post, say something to the people you see using AI, and educate them. Change is possible, as is a positive future with ethical and sustainable progress in the direction of greater technological advancement.

