This is a fan-made “cinematic” style trailer made with AI/machine learning. Why? Because I spent way too much time playing this game.
Generated with Midjourney, DeepFaceLive, Runway and Premiere Pro. The trailer features deepfakes, image-to-video, AI-generated images and some traditional animation, along with a brief introduction to the four protagonists of the game:
D’arce the Knight, Cahara the Mercenary, Ragnvaldr the Outlander and Enki the Dark Priest.
In Fear and Hunger, the four protagonists head into the Dungeon of Fear and Hunger to find the promised one, Le’garde, who is supposed to change the world.
They didn’t know the horrors and ancient mysteries the dungeon held, or how deep it went.
Hope you enjoy it! If you are interested in the game, it’s available on Steam. Content warning, though: it is a game meant for adults, with a lot of mature themes.
Thank you very much for watching, and let me know what you think! Subscribe for more trailers and lore from the Fear and Hunger games.
AI-generated art is one of the latest fronts of the coming wave of the artificial intelligence revolution. Artists have staged protests regarding models like Lensa, OpenAI’s GPT3 and the diffusion-based image generation models, since these were trained on artists’ images and can generate pictures in the style of a specific artist, with no attribution or compensation. This video was inspired in part by the ArtStation artist AI protest.
I believe the artists staging the “AI is theft” protest are correct: their work was used to train machine learning models. These models are built by crawling the internet and scraping images along with their alt text as a training data set, which is why they can recreate images in very specific styles.
AI-generated art is the current area of discussion in the continuing automation of the economy and industry. What happens to the people and workers who get displaced? What happens to the wealth that will be generated by more efficient businesses and industries? Who is best positioned to take advantage of these new, efficient tools and technologies in the marketplace?
Currently AI tools are limited by many factors, among them computational and processing resources, but these limitations will be overcome with time. What happens when natural language processing bots can convincingly generate misinformation articles? What happens when spam bots get better and more clever? In time, image and video processing will get better too; what will be the impact of easy-to-generate deepfake videos produced by generative adversarial networks, with their paired generator and discriminator?
My concern is not just AI, but who will be in control of these high-tech business tools. The coming of AI and machine learning will not change the structure of our capitalist society, where wealth has been shifting further up to the wealthy with each passing decade. I fear AI and machine learning will be used to accelerate that accumulation of wealth by the wealthy and “elites”.
Images were sourced from reddit.com/r/aiArt, generated with OpenAI’s GPT3, or created using TensorFlow’s fast style transfer algorithm.
I’ve always been fascinated by the concepts of artificial intelligence and automation, especially since my entire adult life I’ve worked as an engineer in factories and manufacturing settings. As machine learning technologies develop and progress, we are going to see more and more applications of AI-powered automation in creative fields such as content generation and image/art generation. So I thought it would be fun to see how much of a YouTube video on a given subject could be automated, or at least facilitated, with machine learning, artificial intelligence and Python.
OpenAI offers cheap access to their DaVinci (GPT3) language model and their image generation model via their API (https://beta.openai.com/docs/introduction). I wanted to pick a topic I am not an expert in, and since the day I was working on this was the day of the FIFA World Cup, that seemed like a great choice for a test topic.
I used the DaVinci language model to facilitate the production of a script. Since the responses are limited and the tool is not meant for long, broad answers, I asked the DaVinci model a series of questions: “What is the World Cup?”, “What is FIFA?”, “When is the World Cup?”, “How is the World Cup organized?”, “Where is the World Cup?”, “What do the winners of the World Cup receive?” and so forth. Once I removed the prompts I had asked it, I had about 700 words for a quick video essay of a few minutes.
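For anyone curious, here is a rough sketch of what that question-by-question prompting could look like with OpenAI's legacy Python client. The exact model name ("text-davinci-003"), the API key handling and the token limit are my assumptions; the post only says "the DaVinci model".

```python
# Sketch: build a rough script by asking the DaVinci model a series of short
# questions with the legacy Completion endpoint (openai 0.x SDK). The model
# name "text-davinci-003" is an assumption.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

questions = [
    "What is the World Cup?",
    "What is FIFA?",
    "When is the World Cup?",
    "How is the World Cup organized?",
    "Where is the World Cup?",
    "What do the winners of the World Cup receive?",
]

script_parts = []
for q in questions:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=q,
        max_tokens=200,      # keep each answer short
        temperature=0.7,
    )
    script_parts.append(response["choices"][0]["text"].strip())

script = "\n\n".join(script_parts)  # answers only; the prompts are not kept
print(len(script.split()), "words")
```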
Having a script, I ran it through NLTK, a natural language processing (NLP) library, which broke the script down into sentences and placed them in a Python list that I could loop over.
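A minimal sketch of that sentence-splitting step, assuming NLTK's punkt sentence tokenizer and a script saved to a local text file (the filename is hypothetical):

```python
# Sketch: split the generated script into sentences so each one can later
# drive a single image prompt. Assumes the script was saved to "script.txt".
import nltk
nltk.download("punkt")  # one-time download of the sentence tokenizer data
from nltk.tokenize import sent_tokenize

with open("script.txt", "r", encoding="utf-8") as f:
    script = f.read()

sentences = sent_tokenize(script)  # a plain Python list of sentence strings
print(len(sentences), "sentences to turn into image prompts")
```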
(Note: there is a small cost associated with these AI services; see OpenAI’s website for details, but it’s pretty cheap.)
Using OpenAI’s API, I set up a for loop over my Python list of sentences from the script the DaVinci language model produced. The loop fed the OpenAI image generation model one sentence at a time and saved the image that was generated. In the end, I had about 30 images for my video, one for each sentence in the list. Each 1024×1024 image from the OpenAI image-generating neural network cost about 1.5 cents ($0.015), so 30 images at the largest resolution cost me less than a US dollar.
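Something like the following loop would do it, assuming the legacy openai 0.x SDK's image generation endpoint (openai.Image.create), which returns a hosted URL for each generated image; the output filenames are hypothetical.

```python
# Sketch: one image per sentence via OpenAI's image generation endpoint
# (legacy openai 0.x SDK). Each call returns a URL, which is downloaded and
# saved locally. "sentences" is the list built with NLTK above.
import openai
import requests
from nltk.tokenize import sent_tokenize

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("script.txt", "r", encoding="utf-8") as f:
    sentences = sent_tokenize(f.read())

for i, sentence in enumerate(sentences):
    result = openai.Image.create(prompt=sentence, n=1, size="1024x1024")
    image_url = result["data"][0]["url"]           # the API returns a hosted URL
    with open(f"frame_{i:02d}.png", "wb") as out:  # ~$0.015 per 1024x1024 image
        out.write(requests.get(image_url).content)
```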
So now I had a script mostly generated by DaVinci (I did do some cleanup and streamlining to make sure it flowed) and a series of images for my video essay on the FIFA World Cup.
Next I used Google’s Text-to-Speech (TTS) service to generate a voiceover from the script, which gave me about a 3-minute audio file.
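The post doesn't say which Google TTS client was used; one lightweight option is the gTTS package, roughly as follows (filenames are hypothetical, and the Cloud Text-to-Speech API is the heavier-weight alternative).

```python
# Sketch: generate narration from the script text using the gTTS package,
# one simple way to reach Google's TTS service.
from gtts import gTTS

with open("script.txt", "r", encoding="utf-8") as f:
    script = f.read()

gTTS(text=script, lang="en").save("narration.mp3")  # ~3 minutes for ~700 words
```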
Loading up Premiere Pro, I cleaned up the audio, since there were inconsistent pauses between paragraphs, and set the narration against my generated images. The images mostly lined up with the prompts used to generate them. Exported, and voila! I had a 3-minute-and-change video that was mostly generated by algorithms and artificial intelligence.
In conclusion, AI and machine learning are very powerful tools for streamlining the creation of content, correcting errors and speeding up portions of the workflow, saving people tons of time. Having said that, a human review and touch is still needed; otherwise the content will lack that je ne sais quoi and authenticity that most people like from their small content creators. The images were hit or miss, but it was always interesting to guess how the model had been trained and see what it generated based on the input.
I think most creators will eventually face the choice of using some of these tools and techniques, or being left behind with time. But the human element will always be needed so that the content is not sterile and lifeless.