Bringing Splinterlands Lore to Life - Step-by-Step Guide


lore-tutorial.png

Hey fellow Splinterlands addict!
Thank you for supporting my experiments with the Splinterlands lore videos; I am overwhelmed. I will keep creating more videos and sharing them with you all so I can get your feedback and keep improving them.
I was very pleased when the lore videos caught the attention of @bravetofu and he wanted to know about the process I follow to create them. @achim03 and @unitmaster also showed interest in the video generation process.
Today's article focuses on explaining that process, and the challenges I faced, in bringing our beloved Splinterlands lore to life as audio-visuals. The workflow I present here is self-taught, pieced together from YouTube videos and a lot of trial and error, so I am sure there are better ways to do this.

Step 1: Breaking Down the Lore with ChatGPT

The lore entries in Splinterlands are lengthy texts, written in a way that demands close attention to follow the story properly. My first task is to break the story into smaller visual scenes.

  • I use ChatGPT to summarize and structure the text into scene prompts.
  • Each scene must have a clear visual element (a character, location, or action).

Example:
In Doctor Blight’s lore, there’s a part describing how “the seas themselves shifted, bringing alien currents with strange fish.” I turned this into:

“Dark, twisted ocean waves with strange glowing fish swimming in unnatural currents.”

And this helped me generate the right image for the scene in the next step.
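I do all of this in the ChatGPT web interface, but the same step can be scripted if you prefer. Below is a rough sketch using the OpenAI Python SDK; the model name, the instruction wording, and the lore file name are my own assumptions for illustration, not part of my actual workflow.

```python
# Sketch: split a lore entry into scene prompts with the OpenAI Python SDK.
# The model name, prompt wording, and file name are assumptions; I normally
# just paste the lore into the ChatGPT web interface instead.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lore_text = open("doctor_blight_lore.txt").read()  # hypothetical lore file

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize the lore into 8-10 numbered scene prompts. "
                    "Each prompt must describe one clear visual element "
                    "(a character, location, or action) in a single sentence."},
        {"role": "user", "content": lore_text},
    ],
)

print(response.choices[0].message.content)  # the list of scene prompts
```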

Step 2: Generating Images with DALL·E

Once I have the prompts, I feed them into DALL·E. This gives me illustrations for each scene.

Example:
For Blight’s backstory, I generated:

  • His workshop filled with vials and alchemical gear.
  • The ocean scene with strange currents.
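As with step 1, I work in the web interface, but here is a rough sketch of how the same image generation could be scripted through the OpenAI images API. The model, image size, and file naming are assumptions for illustration only.

```python
# Sketch: generate one illustration per scene prompt with the OpenAI images API.
# Model name, size, and file naming are assumptions; I use the web UI in practice.
import base64
from openai import OpenAI

client = OpenAI()

scene_prompts = [
    "Dark, twisted ocean waves with strange glowing fish swimming in unnatural currents.",
    "A cluttered alchemist's workshop filled with vials and strange gear.",
]

for i, prompt in enumerate(scene_prompts, start=1):
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1792x1024",             # widescreen suits video scenes
        response_format="b64_json",
    )
    image_bytes = base64.b64decode(result.data[0].b64_json)
    with open(f"scene_{i:02d}.png", "wb") as f:
        f.write(image_bytes)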

Step 3: Animating with Pixverse

Static images don’t tell a story. That’s where Pixverse comes in. It allows me to add motion effects like flowing smoke, moving waves, or camera pans.
I use Pixverse to animate the static images in two ways:

Image-to-Video Animation:
With this feature, I convert still images into short video clips by adding motion effects such as zooms, pans, or environmental movements. This gives each scene a sense of depth and dynamism, making the visuals more engaging than simple static frames.

Transitions Between Scenes:
Rather than cutting abruptly from one image to another, I use Pixverse to create smooth transitions. Fades, dissolves, or motion-based transitions help connect scenes in a natural flow, so the story feels continuous.

Example:
For the ending scene, I used the following image and prompt:

DALL·E 2025-09-11 13.53.53 - In a devastated jungle landscape, Alastair has fully transformed into Doctor Blight. He stands tall and menacing, now wearing his plague-doctor inspir.webp

Cinematic gothic horror in teal and gold. Extend the scene: a plague doctor figure with glowing green eyes and lantern stands before a smoking volcano. Green mist thickens, crawling across the jungle floor, curling around graves and skulls. The jungle blackens and withers as if consumed by rot. Crows wheel overhead, their shadows stretching unnaturally. Slowly, the figure fades into the mist until only the glowing lantern remains, then it too dims. Ash drifts across the sky as the land lies dead and silent.

And the image turned into this...
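Pixverse itself I drive entirely through its web app, so there is no script to share for this step. For anyone who wants a free, scriptable approximation of the two effects above (a slow zoom on a still image and a dissolve between scenes), here is a rough moviepy sketch. To be clear, this is a stand-in, not how Pixverse generates its motion, and the file names, durations, and zoom rate are placeholder choices of mine.

```python
# Sketch: a free, scriptable stand-in for two Pixverse-style effects using moviepy.
# This is NOT how Pixverse works; it only approximates a slow zoom and a crossfade.
# File names, durations, and the zoom rate are placeholder choices.
from moviepy.editor import ImageClip, CompositeVideoClip, concatenate_videoclips

W, H = 1280, 720

def ken_burns(image_path, duration=5, zoom_per_second=0.03):
    """Turn a still image into a short clip with a slow zoom-in."""
    base = ImageClip(image_path).set_duration(duration).resize((W, H))
    zoomed = base.resize(lambda t: 1 + zoom_per_second * t).set_position("center")
    return CompositeVideoClip([zoomed], size=(W, H)).set_fps(24)

scene1 = ken_burns("scene_01.png")
scene2 = ken_burns("scene_02.png").crossfadein(1)  # 1-second dissolve into scene 2

# Negative padding overlaps the clips so the crossfade has room to play.
video = concatenate_videoclips([scene1, scene2], padding=-1, method="compose")
video.write_videofile("scenes_01_02.mp4", fps=24)
```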

Step 4: Narration with ElevenLabs

For the voiceover, I use ElevenLabs to generate the narration. I simply paste the lore text and convert it into speech. There are several narrator voices available; I usually go with the one that best fits the character whose lore I am telling.
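For completeness, this step can also be done through the ElevenLabs API instead of the web app. The sketch below reflects how I understand the text-to-speech endpoint at the time of writing; the voice id, model id, and file names are placeholders, so do check the current docs before relying on it.

```python
# Sketch: the narration step via the ElevenLabs REST API instead of the web app.
# The voice id, model id, and file names are placeholders; verify against the docs.
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"
VOICE_ID = "YOUR_CHOSEN_VOICE_ID"   # pick a voice that fits the character

lore_text = open("doctor_blight_lore.txt").read()  # hypothetical lore file

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": lore_text, "model_id": "eleven_multilingual_v2"},
)
response.raise_for_status()

with open("narration.mp3", "wb") as f:
    f.write(response.content)   # the endpoint returns audio bytes (MP3 by default)
```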

Step 5: Editing & Assembly in Canva

Finally, I bring everything together in Canva:

  • Import the animated scenes.
  • Sync them with the narration.
  • Add background music (mystical or eerie, depending on the story).
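In practice I do all of this in Canva's editor, but for anyone who prefers a scripted assembly, moviepy can stitch the clips and narration together too. The sketch below reuses the placeholder file names from the earlier sketches and is only a rough stand-in for what I do manually in Canva.

```python
# Sketch: a scripted alternative to the Canva assembly step using moviepy.
# File names are placeholders carried over from the earlier sketches.
from moviepy.editor import (AudioFileClip, CompositeAudioClip, VideoFileClip,
                            concatenate_videoclips)

# Load the animated scene clips in order and join them into one video.
scenes = [VideoFileClip(f"scene_{i:02d}.mp4") for i in range(1, 6)]
video = concatenate_videoclips(scenes, method="compose")

# Layer the narration over quiet background music, trimmed to the video length.
narration = AudioFileClip("narration.mp3")
music = AudioFileClip("eerie_theme.mp3").volumex(0.2)   # keep music under the voice
mix = CompositeAudioClip([narration, music]).set_duration(video.duration)

video.set_audio(mix).write_videofile("doctor_blight_lore.mp4", fps=24)
```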

And that's it.
If you have not watched the Doctor Blight video yet, do check it out and let me know your feedback.

https://youtu.be/yqwWa13eBDE

Challenges Faced

Creating these lore videos is exciting, but it comes with its own set of challenges.
One of the biggest is inconsistency in AI outputs - sometimes the same character looks different across images, which breaks continuity. Animating with Pixverse also has risks, since too much motion can distort faces or objects, not to mention the high token cost to generate each video. Voice generation with ElevenLabs can sound robotic or rushed on the first attempt, requiring fine-tuning of pacing and tone. Finally, the editing process in Canva can be tricky; syncing narration with animated clips takes a lot of trial and error.
Despite these hurdles, I’ve learned to work around them by refining prompts, regenerating assets when needed, and experimenting with timing until everything feels natural.

I hope walking through this workflow was insightful. Do leave a comment if you have any questions for me.

Until my next article...



6 comments

This was informative, thanks! There are so many AI tools out there and the options seem to be increasing and changing frequently. I like what you're doing so far, looking forward to your next video.


Thank you so much.
I agree, there are too many tools out there to choose from these days.


That's nice work. I am guessing the next article will feature the epic Water 7-mana reward card that has 3 ranged attack, 4 speed, and 7 health at max level, and which also has the Double Strike, Close Range, and Cripple abilities.
