OpenAI created a worldwide buzz around generative AI when it made ChatGPT widely available to consumers in late 2022. The technology has since developed rapidly, helped by accessible APIs that people outside OpenAI can test, use, and build upon while contributing to its models. With all the hype surrounding this technology, there is a growing concern among organisations about falling behind or missing out. AI and generative AI are hot topics in tech sales, and there is an undeniable impression that if you are not at least talking about AI, you are not in the know. Is there an opportunity for you to use AI in your organisation, and do you need to harness AI at all?
Artificial Intelligence (AI) is a term used to describe machines performing capabilities we associate with human thinking, including learning, understanding, reasoning, and interacting. In modern applications, AI can take several forms: as algorithms, as part of a process (e.g. data analysis), or as an end-user product.
Generative AI (GenAI) describes models that take in raw or training data and generate something new that is similar, but not identical, to it. The “thinking” is statistical: the model produces the most probable response to the prompt it receives from the user.
Foundation Models serve as the starting point for bigger and more complex models. In the context of text-based generative models (LLMs), the foundation model is trained on a large volume of text and computes an appropriate output, i.e. statistically probable text, based on that training.
Large Language Models (LLMs) are trained, largely without supervision, on unstructured data. OpenAI’s ChatGPT belongs to this category of AI.
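To make “statistically probable text” concrete, here is a minimal sketch, assuming Python with the Hugging Face transformers and PyTorch libraries installed and using the small, publicly available GPT-2 model purely as an illustration, of how a language model assigns probabilities to possible next words given a prompt.

```python
# Minimal sketch of next-token prediction. Assumes the `transformers` and
# `torch` libraries are installed; GPT-2 is used only as a small illustration
# of how a foundation model scores likely continuations of a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # scores for every token in the vocabulary
probs = torch.softmax(logits[0, -1], dim=-1)   # probabilities for the next token only

# Show the five most statistically probable continuations.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10s}  {p.item():.3f}")
```

The model does not “know” the answer; it simply ranks which words are most likely to follow the prompt based on its training data, which is the behaviour larger foundation models scale up.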
Advantages of Generative AI (foundation models, as opposed to task-specific AI)
📈 Performance: responses are fast, beating the pace of a user manually working through several pages of text.
🎯 Productivity Gains: the potential to provide efficient support to workers, e.g. summarising content from multiple Internet sources rather than one worker accessing, reading, and understanding one source at a time.
Disadvantages
💲 Compute Cost: small enterprises are unlikely to be able to train foundation models on their own without heavy financial and resource investment.
🤝 Trustworthiness: given their volume and complexity, data sets cannot reasonably be vetted manually in their entirety, or doing so would require significant effort and time from enterprise resources.
The use of some form of automation in industry operations and processes is not new, but the utilisation of generative AI is a rapidly developing space in business. Here are a few examples of how different industries may choose to use generative AI and the corresponding AI products that are available today.
Pulumi Copilot
Pulumi Copilot is a conversational chat interface that uses LLMs to help users quickly execute a variety of cloud infrastructure management tasks. A user can converse with the AI through chat and ask questions about any topic related to their cloud infrastructure, covering a range of day-to-day infrastructure use cases.
Project CodeNet
The goal of this project is to provide the AI-for-Code research community with a large-scale, diverse, and high-quality curated dataset to drive innovation in AI techniques. The intended use case is for AI to learn to distinguish correct code from problematic code, and eventually to explore automatic code correction. This use of AI has the potential to rewrite and modernise legacy code.
GitHub Copilot
GitHub Copilot is an AI coding assistant for engineers. It offers autocomplete-style suggestions as you code, analysing the context of the file you are editing as well as related files, and presents suggestions from within your text editor. GitHub Copilot is trained on all languages that appear in public repositories.
The intent of this capability is to reduce coding and debugging time for software engineers, especially for repetitive or routine tasks.
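As a purely illustrative example (not a captured Copilot transcript), the interaction tends to look like this: the engineer writes a comment and a function signature, and the assistant proposes the body.

```python
# Illustrative only: the developer types the comment and the signature below,
# and an assistant such as GitHub Copilot might suggest the function body.
# The suggestion shown is hypothetical, not actual Copilot output.
import re

def is_valid_email(address: str) -> bool:
    """Return True if the string looks like a valid email address."""
    # --- suggested completion begins here ---
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.match(pattern, address) is not None
```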
Sora (OpenAI)
Sora is a text-to-video model that can create realistic and imaginative scenes from text prompts. The technology has recently been used to create an AI-generated music video which, as of this writing, can easily be distinguished as something created by AI. This category of use also highlights the potential for malicious “deep-fake” media.
Unreal Engine (Epic Games)
While Epic Games is better known in the video gaming industry, Unreal Engine provides work optimisation tools that allow developers to cut down on development time. Time efficiency can be achieved by utilising its models for animation tooling, rendering, and physics. This is more than just reusing existing assets: AI fills in the gaps between an initial state and a target state, e.g. animating movements such as a character walking in a 3D environment.
AI has also been built specifically to generate a social media post for a user from a text prompt.
Sample tools: SocialBee, Radaar, and Semrush
Given a text prompt, these tools can generate descriptive text about a topic and also a highly relevant image to go with the narrative of the post. Social media managers and digital marketing personnel may find this useful since they would need to generate a large volume of posts regularly to maintain and increase engagement for the accounts they manage.
Stable Diffusion/Diffusion Art, OpenAI (DALL-E)
These AI tools, and the art they produce, have been popping up across the Internet and generating a lot of buzz in online art communities, partly due to alleged copyright infringement. The AI generates new images from existing ones accessible on the Internet. The user provides detailed prompts describing the image they want to see, which may include the art type and style, e.g. that of famous painters like Van Gogh.
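For context, generating such an image programmatically might look like the following minimal sketch, assuming Python with the Hugging Face diffusers library and a CUDA-capable GPU; the checkpoint name is illustrative of a Stable Diffusion release rather than a recommendation.

```python
# Minimal sketch of text-to-image generation with Stable Diffusion.
# Assumes the `diffusers` and `torch` libraries and a CUDA-capable GPU;
# the checkpoint name is illustrative, and any compatible Stable Diffusion
# checkpoint you are licensed to use would work.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse on a cliff at night, oil painting in the style of Van Gogh"
image = pipe(prompt).images[0]   # run the diffusion process for this prompt
image.save("lighthouse.png")
```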
OpenAI Audio Models
Common uses of AI-enabled audio models include transcribing speech into text, translating many languages into English, and converting text into spoken audio. This can provide language accommodations in online services, e.g. when users interact with chatbots on a company website.
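As an indicative sketch using the official OpenAI Python SDK (assuming an API key is set in the environment; the model names and file names below reflect the API at the time of writing and are illustrative), speech-to-text and text-to-speech calls look roughly like this.

```python
# Rough sketch using the OpenAI Python SDK (assumes OPENAI_API_KEY is set).
# Model names such as "whisper-1" and "tts-1" and the file names are
# illustrative and may change over time.
from openai import OpenAI

client = OpenAI()

# Speech to text: transcribe a recorded customer query.
with open("customer_query.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
print(transcript.text)

# Text to speech: read a chatbot reply back to the user.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Thanks for contacting us. An agent will follow up within one business day.",
)
with open("reply.mp3", "wb") as f:
    f.write(speech.content)   # save the generated audio to a file
```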
AI-generated text content may be the most popular use case of GenAI. OpenAI’s ChatGPT can write poetry, song lyrics, essays, and more, given a user’s text prompt. Grammarly provides live writing assistance as well as online paragraph-rewriting services. On the other hand, there are also AI tools that detect plagiarism in essays, e.g. AI Checker.
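A minimal sketch of generating this kind of text programmatically, again assuming the OpenAI Python SDK and an API key in the environment (the model name is illustrative of current offerings), looks like this.

```python
# Minimal text-generation sketch using the OpenAI Python SDK
# (assumes OPENAI_API_KEY is set; the model name is illustrative).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Write a four-line poem about the ocean."},
    ],
)
print(response.choices[0].message.content)
```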
Perceptions of the impact of AI and GenAI generally revolve around (1) boosting productivity through the reduction or elimination of repetitive tasks, and (2) replacing or reducing the value of high-skilled jobs and deepening inequality, where those who are slow or struggle to adapt are left behind.
The potential productivity gains are promising, but displacement of labour may happen in parallel as organisations weigh costs. There are also brewing concerns over customers, or individuals at large, not being able to distinguish between chatting with an AI and a real person, which brings into question how impersonal customer service will become in the age of AI.
Personalised learning from AI may bridge the gap for students with varying learning needs, whether in pace, medium (text- or audio-based), or language. AI may also offer simplified summaries of complex concepts. On the other hand, learners who rely too heavily on AI may undermine their capacity to work independently and to think critically.
GenAI also has the potential to reduce the availability and quality of information where the fine-tuning of a model is influenced by malicious parties, e.g. through bots supplying pre-defined text content or prompts.
In the age of generative AI, the party with the capability to host and process a significant volume of data (a foundation model) early in the game will have an advantage over those who adopt or utilise AI later. The sentiment is similar for high-skilled individuals who have not been enabled or trained to work alongside AI, with AI complementing human effort as opposed to replacing humans.
One concern raised about the rise of AI and the devaluation of high-skilled work is that people may eventually see little value in investing in a degree or certification for specific skills.
AI can potentially improve diagnostics and predictive capabilities, leading to better patient outcomes. An example would be using GenAI to read and analyse medical imaging, e.g. X-rays or MRI scans. Another potential advantage is enabling patients to manage their health more proactively through apps that are accessible outside of clinical settings.
AI is inherently neither good nor bad. It is a tool with the potential to unlock opportunities for businesses, not only in productivity or cost reduction but also in enhancing human capability and supporting human effort.
One common challenge among organisations is retaining knowledge and high-skill capabilities; people will inevitably move on. If the organisation invests in its own generative AI model, there is an opportunity to fine-tune it on collective knowledge that can then be used to train and support other people in the organisation. A sample use case would be service desk personnel using the fine-tuned model to receive suggested resolutions, as in the sketch below.
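As a hedged sketch of what that could involve in practice, assuming a hosted fine-tuning API such as OpenAI’s is used (the file names, example ticket content, and base model name are illustrative), historical service desk resolutions would be converted into chat-formatted training examples and submitted as a fine-tuning job.

```python
# Hedged sketch: fine-tuning a hosted model on historical service desk
# resolutions via the OpenAI API. File names, example content, and the base
# model name are illustrative; organisations may equally choose retrieval-based
# approaches or self-hosted models.
import json
from openai import OpenAI

client = OpenAI()

# 1. Convert past tickets into chat-formatted training examples (JSONL).
tickets = [
    {
        "issue": "User cannot connect to the office VPN after a password reset.",
        "resolution": "Ask the user to re-enter credentials in the VPN client, "
                      "then clear the cached profile and reconnect.",
    },
]
with open("service_desk.jsonl", "w") as f:
    for t in tickets:
        f.write(json.dumps({
            "messages": [
                {"role": "system", "content": "You are an internal service desk assistant."},
                {"role": "user", "content": t["issue"]},
                {"role": "assistant", "content": t["resolution"]},
            ]
        }) + "\n")

# 2. Upload the data and start a fine-tuning job on a base model.
training_file = client.files.create(file=open("service_desk.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini-2024-07-18")
print(job.id)  # track this job until the fine-tuned model is ready to use
```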
Given that there are different types of AI fulfilling use cases across a variety of industries, how would an organisation even begin to access AI?
There may be more accessible and more secure options in the future but this is something to consider in your organisation’s AI journey.
Is your business suited to utilising AI? Consider the known risks associated with this technology. Once upon a time, collecting customer information digitally was simply considered convenient, but it also brought data security into focus, not only as a concept but as compliance with legal requirements and tangible consequences.
Here are a few questions to consider:
AI capabilities are exciting and have the potential to transform businesses across a variety of industries, supporting a wide range of business use cases and elevating operations. On the flip side, the use of AI has tangible risks that businesses need to consider seriously.
Consider using an AI assessment framework or working with entities that can run AI audits to ensure that your business is on the right track to avoid costly compliance changes in the future.
Is AI the right solution for your business? Get in touch with Symphonic or head to our Strategic Service Pillar and we can help you navigate the complexities of implementing AI responsibly.
References:
Bank of America 2024, ‘Artificial intelligence: A real game changer’, n.d.
Capraro, V, et al. 2024, ‘The impact of generative artificial intelligence on socioeconomic inequalities and policy making’, National Library of Medicine, vol. 3, no. 6, pp. 191, PNAS Nexus Database.
Daws, R 2024, ‘Unreal Engine 5.4 brings animation, rendering, and AI upgrades’, Developer Tech, 24 April.
Georgieva, K 2024, ‘AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity’, IMF Blog, 14 January.
GitHub 2022, ‘IBM/Project_CodeNet’, n.d.
Goyal, M, Varshney, S & Rozsa, E 2023, ‘What is generative AI, what are foundation models, and why do they matter?’, IBM Research, 8 March.
Heidloff, N 2023, ‘The Importance of Prompt Engineering’, heidloff.net, 6 March.
Hoban, L 2024, ‘Introducing Pulumi Copilot: Intelligent Cloud Management’, Pulumi, 12 June.
IBM 2024, ‘watsonx Assistant’, n.d.
Littman, M.L. et al. 2021, ‘How has AI impacted socioeconomic relationships?’, Stanford University, 16 September.
Martineau, K 2023, ‘What is generative AI?’, IBM Research, 20 April.
Martineau, K 2022, ‘Your ‘check engine’ light is blinking. What if an AI could tell you why?’, IBM Research, 30 November.
NSW Government 2024, ‘The NSW AI Assessment Framework’, July.
Puri, R 2021, ‘Kickstarting AI for Code: Introducing IBM’s Project CodeNet’, IBM Research, 11 May.
Stability AI 2024, ‘Introducing Stable Audio Open - An Open Source Model for Audio Samples and Sound Design’, 5 June.
Szczepański, M 2019, ‘Economic impacts of artificial intelligence (AI)’, European Parliamentary Research Service, July.
Wong, V 2024, ‘5 Best Social Media Post Generators’, Pikto Chart, 7 February.
Date Published: 29 July 2024