AI (Artificial Intelligence) is now being used for a wide range of purposes. Some of these can have a very positive impact on modern life, while others raise serious concerns about their use and their impact on both society and the environment.
An important distinction to make when considering these issues is between digital tools, which increasingly use AI technology to underpin their functionality, and generative AI (also known as large language models, or LLMs).
The most common type of AI systems currently in use are those designed to perform specific tasks, such as analysing large data sets, responding to simple queries on websites (chatbots), managing image recognition, acting as search engines or filtering spam emails. They excel at their designated function but cannot generalise beyond it.
Digital tools or Gen AI?
Digital tools, such as grammar/spelling checkers, image manipulation software used to sharpen images, or automated diary reminders, are also increasingly utilising AI technology to improve their functionality. These tools aim to support human creativity by using limited generative functionality, targeted at specific purposes.
Generative AI (large language models, or LLMs) is a type of artificial intelligence that uses machine learning algorithms and massive data sets to replicate human language. This gives these systems their ability to translate languages, predict text, and produce content. In contrast to earlier natural language processing (NLP) models, LLMs train on much larger data sets, allowing them to become more complex and to generate what appear to be conversational interactions.
The increasing use of these systems raises ethical questions about their selective use of data and the associated biases, the energy required to run them, their ability (or inability) to make decisions, and – from the perspective of creatives, authors, artists, and actors among them – their use as content creators.
Is this distinction clear?
No.
The distinction between digital tools and generative AI is fuzzy, as some digital tools now use LLMs to underpin their AI capabilities – but for most, the clear difference lies in how these tools are used, and what they produce.
LLMs are capable of responding to user inputs with unique outputs, and of responding dynamically in real time, which makes them particularly useful for powering interactive programs such as virtual assistants, chatbots, and recommendation systems like Copilot. This sort of use can reduce administrative time and costs without immediately (or obviously) threatening creative endeavours. It still creates concerns, however, especially environmental ones.
What are the concerns?
It is important to understand that, although tools like ChatGPT, Google Gemini, or Microsoft Copilot can give the impression of a self-aware AI, there are no conscious decisions involved in their output. Their responses emerge through algorithms that identify the statistically most likely response based on their training data and the user’s prompt.
In addition, many of the tools that generate text or images in response to prompts, such as ChatGPT, Claude, or Midjourney, have been trained on materials ‘scraped’ from the internet without regard for copyright or consent from the original creators.
They generate and ‘create’ by analysing these existing materials, usually without understanding the original content. AI images can often be identified by ‘tells’ such as too many fingers on a hand, inconsistent shadows, or other misshapen elements. In generated text, opinions can be presented as fact, and facts can be distorted or simply made up without supporting evidence. These ‘tells’ are slowly disappearing as systems improve, but the underlying issue remains: the absence of human discernment and understanding. GenAI text or images, while sometimes very convincing, are bereft of creative input and lack the qualities that human creation brings.
This is why concerns are being raised about the impact of these tools on creative activity. Authors and artists are angry about the inappropriate use of their copyrighted works. They are concerned about the potential for these tools to limit future works and work opportunities. They are also worried about scammers and thieves using these tools to produce misleading content, to generate fake images, to plagiarise text, and to obtain money under false pretences.
Is this happening in our community?
Yes, it is.
Online merchandising platforms are being flooded with offers of cheap embroidery ‘kits’ that use AI-generated images rather than pictures of actual stitching. These images lure buyers with a product that cannot be stitched to match the illustration, and purchasers rarely receive clear instructions with their purchase. Not only do these kits deprive genuine artists of potential income, they also disappoint and discourage buyers, many of whom are only just beginning to explore the world of embroidery.
The Guild is strongly opposed to this practice. A number of textile artists now offer advice on how to identify these ‘fakes’, with guidance on ways to analyse whether an image shows a genuine stitched item or an AI-generated fake. This is one useful guide: [Artificial Intelligence In The Embroidery Community – Lolli and Grace]
While we recognise some of the potential offered by these kinds of AI tools for exploring ideas, we do not advocate or support their use as a source of finished designs. We firmly believe that genuine creativity lies in human hands: it is a skill that can be developed through practice and learning, and one that should be encouraged in all.
