3 harsh truths about AI in translation

Yes, AI is bringing something new to the table. It’s exciting, and it’s meant to revolutionise, well, everything: from the way we go about our days to the way we work and communicate. But … how far are you willing to go?

Many people and companies have been experimenting with AI and exploring the different opportunities it’s creating. You can now generate brand-new images at the click of a button (well, brand-new maybe not so much), you can get detailed answers to all of your questions (whether they are accurate or not, that’s for you to decide), and you can even have all of your content translated in a matter of seconds (it might not be a very good translation, but hey, at least it’s a translation).

Yet, what many seem to forget – or maybe ignore – is that AI has a lot of hidden and not-so-hidden dangers. So, before you decide to feed your information into the machine, let’s have a look at what that really involves.

How AI is used in translation

First, a quick sidenote on how AI is used in translation and what different tools exist nowadays.

On the one hand, we have machine translation tools such as Google Translate and DeepL. These use Neural Machine Translation (NMT) and deep learning to improve their translation capabilities. They have been trained on corpora of parallel texts (source texts and their existing translations) in order to predict the most likely sequence of words in a translation. These tools also improve their results thanks to feedback from users, who can suggest alternative or better translations.
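To make the parallel-corpus idea a little more concrete, here is a toy sketch in Python, using a made-up three-sentence corpus, that picks for each source word the target word it most strongly co-occurs with. Real NMT systems train neural networks on millions of sentence pairs; this is only an illustration of the underlying idea of learning likely translations from aligned texts.

```python
from collections import Counter, defaultdict

# A made-up toy parallel corpus of (source, target) sentence pairs.
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
]

# Count how often each source word co-occurs with each target word,
# plus how often each target word appears overall.
cooc = defaultdict(Counter)
tgt_freq = Counter()
for src, tgt in corpus:
    for t in tgt.split():
        tgt_freq[t] += 1
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def translate_word(word):
    # Score candidates by co-occurrence relative to overall frequency,
    # so ubiquitous words like "le" don't win every time; break ties
    # in favour of the higher raw co-occurrence count.
    return max(
        cooc[word].items(),
        key=lambda item: (item[1] / tgt_freq[item[0]], item[1]),
    )[0]

print(translate_word("cat"))  # → chat
```

Even this crude version shows why scale matters: with only three sentences, the model knows nothing about grammar or word order, which is exactly the gap that neural models and huge corpora try to close.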

On the other hand, there are the AI chatbots, such as ChatGPT, which are large language models (LLMs). These are, in fact, conversational tools: the chatbot responds to a user’s prompt by drawing on the huge amounts of data it was trained on, and adjusts its response depending on the user’s next prompt. Many people now also use them to translate text, even though that’s not their primary function.

Dangers of AI

We’re probably all aware of the most well-known risk of AI: these machines sometimes hallucinate and give answers that are completely made up and wrong, even though they look legit. As a user, you need to be critical and double-check the output of these machines.

Now, apart from these hallucinations, there are other dangers in using AI to translate your content, dangers that can have disastrous consequences for people and companies.

✇ Confidentiality issues

AI offers an attractive solution for people and companies who want to have a document translated but aren’t willing to pay for it. However, we all know ‘free’ doesn’t exist. Everything has its price, and so does AI.

To this day, we don’t know exactly how the data and content you feed into the machine are used, processed and stored. We do know that these AI tools use your input to learn and improve their output, so who’s to say your personal data or company secrets won’t be revealed to other people?

Yes, there are privacy laws in place both in Europe and in the United States of America, but these were conceived before the AI era and did not take into account the possible implications of AI.

My advice? If you are determined to have your documents and content translated by a machine, at least make sure you erase any personal data or confidential information, just to be on the safe side.
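One way to do that is sketched below in Python: strip out obvious identifiers with regular expressions before the text ever leaves your machine. The patterns are illustrative only (real personal-data detection, covering names, addresses and ID numbers, is much harder), so treat this as a minimal safety net, not a guarantee.

```python
import re

# Illustrative patterns only: real personal-data detection (names,
# addresses, ID numbers, ...) is far harder. These catch common email
# and phone formats as a minimal safety net before text is pasted
# into an AI tool.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace every match of each pattern with a [REDACTED-...] tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

print(redact("Mail jane.doe@example.com or call +32 475 12 34 56."))
# → Mail [REDACTED-EMAIL] or call [REDACTED-PHONE].
```

The point is the order of operations: redact first, translate second, so the sensitive details are never part of the input the AI tool can learn from.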

✇ Copyright violations

If you have experimented with AI tools, you have probably been surprised by the striking output they can generate: they can write creative copy and even poems. They can do so because they have been trained on billions of words of archived text (and on images, in the case of image generation) and draw on those archives to formulate the best response to a prompt. That basically means these machines may be violating existing copyrights and trademarks.

Also, it’s not yet clear who owns the content created by AI machines. Is it the machine itself or the user? If it’s not the user, are you really allowed to use the output on, for example, your website, in ads or on your social media? And if you are the new owner of output that violates a copyright, is there a risk you might get sued for it?

These are all very important questions that need to be answered before risking it all.

✇ Huge environmental impact

It’s true that AI has the potential to help tackle environmental issues by mapping them out and predicting possible outcomes, but there’s another side to the equation.

The large data centres that house AI servers produce electronic waste and require huge amounts of electricity, water and other resources. So instead of tackling environmental issues, they are actually damaging our environment and aggravating climate change.

If sustainability is also a priority for your company, then you shouldn’t be using AI just for the sake of it, because you’d be contributing to climate change instead of fighting it.

Still willing to risk it all?

AI is exciting, it’s changing our everyday lives, and many people already love what it can do. But AI also has its limitations and, even worse, its dangers.

It’s my humble opinion that we’re not there yet, and that we shouldn’t be using AI on such a large scale until we really know the dangers of using it and have measures in place to counter its harmful consequences. Only then will we be fully equipped to use these tools responsibly in our personal lives and in our businesses.

So next time you’re about to generate an image or have a text translated, take a moment to think about these potential consequences and consider whether or not it’s really worth it.