Exploration, critical analysis, and opportunities of the use of AI

To say that AI has brought about a major revolution for those of us in the technology field is an understatement. In our case, it has even changed the way we work, and we have even launched our own AI-based product. However, its impact is not always positive and can affect many other professions, fields, and sectors.

At Lostium, we want to reflect on the implications of using AI in our work and our clients' projects so that we can use these technologies as ethically as possible and be aware of the implicit consequences of their use, in order to neutralize or minimize their negative effects as much as possible.

The optimistic view

When we talk about AI, we are encompassing a vast array of technologies under one term. With the help of AI, we can do such diverse things as creating or manipulating images, understanding and generating new texts and documents, obtaining summaries and transcriptions of audio or video, analyzing data, creating digital avatars, using synthetic voices from text, diagnosing diseases, controlling autonomous machines... and hundreds of other things. Obviously, not all uses and applications have the same repercussions and consequences.

In our case, we already use artificial intelligence in our daily work, so it would be absurd to maintain a highly critical position on the matter. Some tools and technologies that accompany us daily include:

  • Copilot when programming, which is essentially AI-assisted help that allows you to be much more productive and efficient when generating code.
  • ChatGPT and Google Bard for all kinds of assistance in generating content, from document skeletons, outline development, summaries, or small translations and grammar checks into other languages.
  • Voice APIs and TTS (Text-to-Speech) for content generation in audio. We have replaced a lot of reading time with listening to content and articles in podcast form.
  • LangChain and OpenAI's APIs for creating and developing new applications based on language models with our own data sources.

For all these reasons, we want to first highlight and acknowledge the countless advantages that AI offers us when incorporating it into the workflows and projects of our company. To list some of them: improved efficiency and productivity in our daily work, digital assistance in repetitive tasks (which we especially dislike), improved customer/user experience, improved data analysis and interpretation, assistance and improvement in creative processes, the possibility of offering greater personalization... Not to mention the new discoveries and developments that will emerge in the coming times, given the exponential growth in advancements in the field.

Despite the fact that this article will focus on some of the drawbacks and issues brought about by artificial intelligence, it is not intended to thoughtlessly or unconsciously jump on the AI bandwagon. Instead, we wanted to reflect on the implications of adopting and developing projects with this new set of technologies in a more ethical and conscientious manner, considering the broader impact.

Negative considerations of AI

Let's take a look and reflect on some of the issues associated with AI that can potentially affect the work we do in our studio, both present and future:

Ilustración robot disfrazado de ladrón

Intellectual Property

Currently, there is a moral and, above all, legal issue that has not yet been addressed regarding the ownership of the datasets used to train AI systems. This especially applies to those systems that manipulate images, illustrations, and photographs, but it can also extend to videos, texts or even voices.

There should be regulations that explicitly require permission (opt-in) and compensate artists for the use of their work in training AIs. Although currently the focus is on the possibility of opting out so that they cannot be used for this purpose, it is not the ideal solution since by default, permission should be requested before incorporating content into a dataset.

It is clear that many AI engines have produced results based on images obtained without the explicit permission of the illustrators to whom they belong. If you have ever used an image generation prompt with the annotation 'in the style of...' with any illustrator or painter of some recognition afterward, you will know what we are talking about. And we are almost certain that the referenced artist has not granted their work for the training of AIs, nor have they been compensated for training the AI in question.

Note to illustrators

There is an online tool, haveibeentrained.com, that allows you to check if your images have been used to train AIs and allows you to opt out, with the intention that all companies using these datasets in the future consult and respect the decision not to use these images. Recently, Stable Diffusion has announced that it would start doing this, although it is the only one, and there are no guarantees that it will continue to do so in the future:

Whether or not we agree with the above, it is clear that AI cannot match the added value that a professional illustrator brings to the intangibles of a project: tone modulation, depth, emotional connection, and unique perspectives and nuances specific to the social or cultural context to which it belongs. An AI, by its very nature, cannot take these aspects into account or, if it does, they would be completely biased by its own design.

In addition to this, there are other specific issues related to the generation of new content through AIs that are still awaiting regulation:

  • Ownership of new works generated by AI.
  • Use of registered trademarks.
  • Licensing.
  • Privacy and rights when individuals are represented.

The issue of intellectual property is significant and needs to be addressed sooner rather than later to legitimize certain uses and generative tools.

Ilustración robot con sombrero de papel de aluminio y ojos disparatados

Lack of Reliability

The decisions made by AI for generating various outputs and how they do it are complex to explain, even for experts in the field. This has implications for the trust we can place in AI's decisions.

The way AI makes decisions and provides responses, especially with Large Language Models (LLMs), sometimes leads to incorrect or completely invented answers, formulated with the same confidence and certainty as a true and correct response. This is known as "hallucinations."

Due to this phenomenon, it is very delicate to put LLM systems into production where the veracity of the response is critical. Although mechanisms to mitigate these hallucinations are being researched, they are a complex phenomenon that is not fully understood and, as of today, impossible to avoid.

Ilustración de robot con aspecto de Hitler y un sello de Incel Inside

Algorithmic Bias

This bias can stem from AIs trained on biased data or because the AI's design or algorithms themselves are biased.

Biases can lead to discrimination or discriminatory results, especially against minorities or groups that are underrepresented in the datasets used. For example, if a dataset for use in a human resources application mostly contains CVs from men, the AI will favor their selection over women.

Furthermore, these biases can lead the AI to recreate representational harms, stereotypes, or clichés of certain races, religions, ethnicities, genders, or minorities.

And finally, there are certain concerns due to this implicit bias in AIs and the impossibility of incorporating ethical or moral issues into the algorithms themselves, which we have to take into account when considering future uses, the target audiences they are directed to, data sensitivity, etc.

Ilustración de robot rodeado de sensores láser rojo

Security Risks

There are certain security risks inherent in the use of AIs that, although they can and should be mitigated, cannot be completely avoided:

  • Leakage of secret or confidential information: An AI-based system can store and process huge amounts of data, including many that may be confidential or strategic for the company, such as medical, personal, or financial data. Through hacking techniques like prompt injection attacks, a malicious user could make the AI reveal such data even if the algorithm is initially instructed not to.
  • Control: Due to the intrinsic complexity of AI systems and the difficulty of understanding how these systems work, they are difficult to control, making it also challenging to take measures to mitigate associated risks.
  • Intellectual property theft and identity theft: In addition to being trained on datasets of questionable origin, an AI could be manipulated to steal intellectual property, third-party algorithms, or trade secrets, as well as to impersonate users for fraudulent operations. Indeed, we recently saw an example of this with telephone voice cloning for service contracting.
Ilustración de robot disfrazado de pirata

Potential for Malicious Use

Although it is something we would not do in our studio, it is worth being aware of the potential for AI in the wrong hands to be used for malicious purposes. The emergence of certain AI-based technologies is being exploited by certain groups for information manipulation and the spread of fake news.

Some of the ways this is carried out include:

  • Creation of deepfakes: Deepfakes are fake videos created using AI techniques like deep learning to manipulate a person's face in an existing video. Deepfakes can be used to make a person appear to say or do something they never said or did. They can be used for malicious purposes, spreading misinformation, or damaging someone's reputation.
  • Image manipulation: It is becoming easier thanks to AI advances, and it will be increasingly difficult to detect if an image has been manipulated.
  • Dissemination of misinformation: Whether intentional and premeditated or unintentional due to hallucinations and invented content that AIs can create, as mentioned earlier.
  • Reputational damage: The combination of all these factors can be used to cause discredit and erosion of the public image of a person or institution.

What is our position on this?

From our point of view, we believe that there is no single valid stance for all the different uses encompassed by the term "artificial intelligence". There are uses with which we disagree due to their ethical and moral implications. Others require consideration of the project's nature and the sensitivity of the data to determine whether it is advisable to use AI-based technologies. However, it is undeniable that in many other fields, the advent of AIs will mark a before and after in how we work or perform certain tasks.

Although we continually experiment with generative images and rely on some of these tools for our work, we prefer not to use AI-generated illustrations trained by third parties as final artwork in our projects, instead of the creative work that an illustrator would provide.

Despite not usually having large budgets, we limit ourselves to using generative images as a tool for generating our own illustrations, for creating sketches, developing proposals, or illustrating internal documents, to obtain quick examples that can serve as inspiration for the styles to be developed, in creating mood boards for clients, or for generating compositions or poses for our own illustrations. We did all these things before by searching for images on the internet, Google Images, Pinterest, or other specialized places.

For other specific uses and projects, image manipulation AIs can provide undeniable value. For example, the use of AIs to modify user images or images we own, perform repetitive manipulation tasks that would normally be done manually through editing tools, or programmatically generate variants and modifications to our own images (background removal, smart cropping, etc.).

As for the other issues we have discussed in the article, we are guided by common sense:

  • Avoid using LLMs when the accuracy of AI results is critical for the project and cannot tolerate hallucinations.
  • Be extremely cautious when the nature of the project may favor the implicit bias of AI to present certain communities, groups, or minorities in a distorted, discriminatory way, or showing representational harm, as well as sensitive information that may hurt or offend other groups or communities.
  • Always verify that the results obtained cannot be maliciously used to damage the reputation of third parties, spread misinformation, or propagate fake news.
  • In terms of security, although we are aware that it can never be absolute, double-check that the system is robust and secure to prevent the leakage of sensitive information or susceptibility to prompt injection attacks or similar risks.

One thing we have no doubt about is that exciting and eventful years await us in this field. From our studio, since we started developing projects that integrate different AI systems, we would like to be part of and contribute to offering an optimistic, yet critical and fair view of these technologies and their implications.

If this text has helped you organize your thoughts or has prompted you to reflect on the uses and implications of AIs, or if you have discovered any aspects you were previously unaware of that you will find useful in the future, we consider our effort worthwhile. If you are also willing to share your thoughts with us or comment on any other aspect of what we have discussed, we would be delighted to hear from you. We are here to learn.