Working in One Bit


Working in 1-bit for this game has been interesting. Going in, I thought my experience with Obra Dinn would let me cruise through the visuals. In typical devlog fashion, let me describe in detail how wrong that assumption was.

Turns out that rendering 3D content in grayscale with realtime post-processing to 1-bit is a totally different thing from making well-crafted 2D art. Most of my existing skillset and pipeline have been almost useless.

Some reasons why my previous encounters with 1-bit haven't translated well:

1. With 3D content, moving the camera around is a critical part of understanding the scene in front of you. With 2D, you need to be able to quickly and easily interpret only what you're seeing on screen right now. Also the Playdate screen is 2 inches across, so legibility is already a challenge.

2. The handling of 1-bit generally and the art style specifically in Obra Dinn was built on technical engineering-based solutions in pipeline features, geometry handling, and post-processing. Getting the look I wanted involved lots of programming and a comparatively little bit of art. For 2D on the Playdate, there's no geometry, no post-processing, and no pipeline complexity. You really just make 1-bit images and the hardware blits them onto the screen. Without adding some stupidly over-engineered programming steps, I'm honestly a little lost here.

3. While I was careful with dithering in Obra Dinn, working and presenting only 2D art makes dither patterns an even bigger issue. The small physical size of the screen means that dithering actually works well to re-create grayscale tones but stylistically I don't think the best use of 1-bit is to simulate 8-bit grayscale. I'd rather make something that's enhanced by the limitations rather than pressed up against them. And that means really thinking about each and every pixel. At 400x240 it's just high enough resolution to make that a lot of thinking.

4. As a specific hardware issue, the Sharp screen suffers from strobing when flipping pixels on/off. This isn't normally perceptible except when scrolling a dithered image. The consequence is that any image that will move needs to either have a static dither pattern applied post-move (hard) or be more carefully designed to persist its pixels as much as possible in the direction of movement (less hard).

5. The tools for working directly in 1-bit aren't that great. Using grayscale is much, much easier in every single modern content creation tool. I made an effort to use classic Mac System 7 tools but let me tell you a little story about unlimited undo: I need it and those old programs don't have it. With modern tools like Photoshop and Blender I have systems and scripts to manage and preview images in 1-bit but it's just clunky enough to keep me unsatisfied.
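To make point 4 concrete, here's my own plain-Python sketch (not code from the game) of the "static dither pattern" idea: ordered dithering where the threshold comes from the pixel's screen position rather than its image position, so a scrolled image keeps the same pattern at each physical pixel.

```python
# Sketch of screen-space ordered dithering with a 4x4 Bayer matrix.
# Because the threshold depends on the *screen* (x, y), scrolling the
# image doesn't flip pixels underneath the pattern -- which is what
# makes the Sharp display strobe.

BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_screen_space(gray, scroll_x=0, scroll_y=0):
    """gray: 2D list of 0..255 values; scroll_x/y: draw offset on screen.
    Returns a 2D list of 0/1 pixels."""
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, value in enumerate(row):
            sx = (x + scroll_x) % 4            # physical screen position, mod tile
            sy = (y + scroll_y) % 4
            threshold = (BAYER4[sy][sx] + 0.5) * 16   # thresholds from 8 to 248
            out_row.append(1 if value >= threshold else 0)
        out.append(out_row)
    return out
```

Dithering in image coordinates would just drop the scroll offset from `sx`/`sy`, and that's exactly what makes pixels flip on and off as the image moves.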


For Example: Pie Scene

There's pie in this game. I only barely know why. Another one of those bright ideas that could work but also maybe not. Anyways, there's pie right now, and building this one scene of a pie sitting on a table is a nice illustration of my challenges with 1-bit in 2D.

After ruminating on it for a bit I concluded that obviously the best way to do this was to model it in Blender and render it out. I need multiple variations of the scene and just moving the objects around in 3D and re-rendering is super easy.

pie-BlenderScene Creating the scene in Blender

Blender is a great program. I wasted years turning the grindstone in Maya for Obra Dinn and everything I did with that painful old app is easier and more sensible in Blender. It's not perfect but let's be realistic here. A proper endorsement: If you're working in 3D, use Blender.

My goal was to take my previous 1-bit 3D techniques into Blender's renderer. It was surprisingly easy. First, Blender's got a great material node system. This let me apply vertex colors to define how objects should create outlines.

pie-BlenderMaterial Blender's material editor

Next, it's got an amazing Compositor. Again node-based, it lets you process the rendered image however you want before generating the final output. It was very simple to split out the lighting/shadow pass and to reproduce Obra Dinn's edge detection logic.

pie-Compositor1 Compositor nodes for edge detection
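I won't reproduce the exact node graph here, but depth-based edge detection generally boils down to running a gradient filter over the depth (or normal) pass and outlining wherever it jumps. A rough pure-Python stand-in for what nodes like these compute (Sobel kernels, arbitrary threshold; all names are mine):

```python
# Hedged sketch of generic depth-discontinuity edge detection:
# outline any pixel where the depth buffer changes sharply.

def depth_edges(depth, threshold=0.5):
    """depth: 2D list of floats. Returns a 2D list of 0/1 edge pixels."""
    h, w = len(depth), len(depth[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel gradients in x and y
            gx = (depth[y-1][x+1] + 2*depth[y][x+1] + depth[y+1][x+1]
                - depth[y-1][x-1] - 2*depth[y][x-1] - depth[y+1][x-1])
            gy = (depth[y+1][x-1] + 2*depth[y+1][x] + depth[y+1][x+1]
                - depth[y-1][x-1] - 2*depth[y-1][x] - depth[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges
```

In the Compositor this is just a Filter node set to Sobel plus a threshold, wired off the depth pass; the vertex colors mentioned above then control which objects get outlined at all.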

There's even the incredible ability to define per-object parameters (Shader AOV) that come through as masks for compositing. Maybe that doesn't sound impressive but let me tell you that shit is beyond handy. In my case, it allows me to set a property on each object that defines which dither pattern is applied to it in the final image. Obra Dinn's renderer had one bit reserved for one special object type -- Blender gives me infinite bits for as many types as I want.
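To illustrate what that buys you downstream, here's a sketch (plain Python, with made-up placeholder patterns) of compositing against such a mask: each pixel carries an id from the AOV pass, and the id selects which dither pattern thresholds it.

```python
# Sketch of per-object dithering driven by an id mask, the idea the
# Shader AOV enables. The patterns are invented examples, not the
# game's actual set.

PATTERNS = {
    0: [[0, 8], [12, 4]],        # 2x2 Bayer-ish: ordinary objects
    1: [[15, 15], [15, 15]],     # high thresholds: renders near-black
    2: [[0, 0], [0, 0]],         # low thresholds: renders near-white
}

def composite(gray, ids):
    """gray: 2D 0..255 image; ids: same-size 2D mask of pattern ids."""
    out = []
    for y, row in enumerate(gray):
        out.append([])
        for x, value in enumerate(row):
            pattern = PATTERNS[ids[y][x]]
            threshold = (pattern[y % 2][x % 2] + 0.5) * 16
            out[y].append(1 if value >= threshold else 0)
    return out
```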

Put that all together and I get this:

pie-BlenderRender

Whoops, I hate it.

If you could move around here in 3D it might come together, in a way. But this is 2D and what you see is what you get and what you get, here, sucks. The result feels like something created in a much more powerful environment (true), scaled down, processed, and squeezed into a static image (all true). There were a few old Mac games created this way and I never quite clicked with that style.

A little desperate, I tried different changes like roughing the shapes up and introducing small errors -- stuff to make it feel more hand-made. Of course none of that worked, so the next step was to give up all my special engineering shortcuts and use actual hands to make it.


Actual Hands

I ditched Blender and just sketched the scene in grayscale using Procreate for iPad. Drawing perfect ellipses is awkward, straight lines are easy, and the pen/eraser/layers/undo/etc in Procreate are all excellent. Patterns are done by importing a full 400x240 image of the repeating pattern and masking the layer.
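If you'd rather generate those full-screen pattern images than draw them, a small script can tile any repeating cell out to 400x240 and save it as a portable bitmap. This is a hypothetical helper of my own, not a feature of Procreate:

```python
# Sketch: tile a small 1-bit pattern cell out to a full 400x240 image
# and save it as plain PBM, ready to import as a masking layer.
# The checker tile is just an example pattern.

WIDTH, HEIGHT = 400, 240

def tile_pattern(tile, width=WIDTH, height=HEIGHT):
    """Repeat a 2D 0/1 tile to fill width x height."""
    th, tw = len(tile), len(tile[0])
    return [[tile[y % th][x % tw] for x in range(width)]
            for y in range(height)]

def write_pbm(path, pixels):
    """Write a 2D 0/1 image as an ASCII PBM file."""
    with open(path, "w") as f:
        f.write(f"P1\n{len(pixels[0])} {len(pixels)}\n")
        for row in pixels:
            f.write(" ".join(str(p) for p in row) + "\n")

checker = [[1, 0], [0, 1]]
full = tile_pattern(checker)
```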

pie-ProcreateDraw

Dropping this directly into 1-bit isn't half bad...

pie-ProcreateOnDevice

...if a bit basic. Needs some precision and extra details. So over to Pixaki, also for iPad, to retrace everything. Pixaki has great ellipse support, worse layering, and is overall better tuned for low-resolution pixel work. No pattern fills though, which is a drag.

pie-Pixaki Retraced and cleaned up. Patterns drawn by hand.

And finally into Photoshop for more layer-based patterns and a perspective distort on the tabletop.

pie-Photoshop Added details and patterns

pie-PhotoshopOnDevice

Ok that feels like a better fit with the limitations. The perspective on the objects is a little off everywhere, which should make it clear that no one with excessive drawing skills was involved.

Even though I used three different apps to get the end result, drawing it by hand was just slightly quicker than modeling the scene in Blender. Makes sense, but the plan is to have this pie get eaten gradually, so the balance will shift as I draw more variations.


Why the Trouble

I wondered why my first instinct was to model this relatively simple scene in 3D instead of just drawing it by hand. Surely most artists would just draw it. Partially it's because I'm just not a super great artist. Mostly, though, I think the answer comes from how I approach production in general.

Basically, I don't have the bandwidth to draw lots of detailed stuff while also doing everything else to create the game. I'm always looking for efficiencies and shortcuts that explicitly avoid the "just buckle down and power through it" techniques you need for most good art. I'd rather work out a stylistic shortcut that saves me time. I've done that successfully in other places for this game; it just didn't work here. The idea that authoring in 3D would make the variations easier couldn't overcome the lame result, I guess.

Whatever lesson I'm learning will hopefully stick with me for the rest of the project. There are very few platforms more constrained than 400x240 1-bit, so even the "power through it" stuff isn't that bad.

Comments


I'm a huge fan of Obra Dinn. Got to get this out of the way first :). And it's always great to see your thought process. It's funny because I'm working on a Playdate game too, and I was tempted to over-engineer everything at first, but then realized that at this small resolution in one bit, you may just be better off brute-forcing your way through.

I've seen that you mention iOS apps for pixel art, and I thought I'd share ArtStudio Pro, which, after many hours of research, is to me by far the best suited app on iOS for pixel art. It has all the fancy layers, multiple undos, and sophisticated masking and color tools of Photoshop, but it also manages pixel-perfect brushes flawlessly, which I found is a problem in Procreate. ArtStudio Pro even has a nearest-neighbor algorithm for layer transforms, which Photoshop doesn't.

Here's a few screenshots: 


Sky shapes by masking a dithering pattern


Nearest Neighbor Rotation yay

I recently started testing ArtStudio Pro out (maybe from one of your earlier recommendations) and you're right, it's excellent. For patterns I prefer using layer effects, which it supports just like Photoshop so all happy here.

Those screens look beyond great btw.

Glad you like it. Definitely keeping my layers for patterns too. I find them especially useful when you have pattern transitions. Thanks much for the kind words ;)


Having read through your entire Papers, Please and Return of the Obra Dinn (sea shanty noir) threads on TigSource, it's kind of awesome to be caught up for "season 3" of dukope.


I landed on a similar workflow (Procreate -> Pixaki -> Photoshop), after trying to approach it from the opposite end. I was originally trying to directly convert high-res drawings, but found the hand-drawn touches I was after just made everything look sloppy: Pixel Art Workflow


Thanks for sharing this, it’s really interesting to see your process and great to see you using Pixaki! I’m glad you like the ellipse tool. Is there anything you’d specifically like to see to improve the layering? I’ll see what I can do with adding pattern fills too. Thanks! Luke (creator of Pixaki)

Cool to see you here, and thanks for Pixaki! For layering, #1 request from me is groups, #2 is masks.


Very insightful! As a solo dev, I sometimes struggled to get that scene just right, and seeing your process laid out like this makes me a lot more confident. Please keep the devblogs coming!


We do a lot of these sorts of 1-bit drawings on the Macintosh. It took us a while, but we ended up building a little program in THINK Pascal for Macintosh System 7 that lets us build little 3D wireframe worlds that we can then trace over.


It might be a good way to mix both if you need to draw a complex scene.


Well those look incredible. Great style that works perfectly in 1-bit. Checking your itch page, all of your design work is amazing so I'm now a fan.

super interesting thanks for sharing 


You may think this is a reach... but

Your 1-bit 3D programming could be of great use to blind people.

There is ongoing research where a gridwork of electrode stimulators is inserted surgically between the two occipital lobes at the back of the brain (this is where the optic nerves lead).

Blind test subjects who have never seen report experiencing bright lights and patterns. Since the technology today only allows for on/off pulses in these gridwork arrays, everything the blind person "sees" is 1-bit deep.


It's possible that if this tech becomes fast and good enough, you could help blind people to clearly see the 3D world around them using glasses equipped with LIDAR and 3D AR processors.

Just food for thought...


Interesting. For Obra Dinn I felt the outlines were key to legibility and wanted them to be as clean as possible. Which meant the relatively art-heavy burden of marking each object with a special color to define how it should generate edges. You wouldn't get that with realtime video of a live scene, so you'd need to tease it out of depth&color. That's how most 3D edge detection works anyways so it's not really a problem, just a little less precise than what I wanted.


There are some pretty good AI solutions for foreground/background tagging of video, perhaps a solution like that could be extended to separate objects into their own groups. Something like an Nvidia Jetson carried on the belt could be feasible, idk.


Thanks for sharing your take on this. I've been enamored by old Hypercard games and 1bit art of the old Mac era (Cosmic Osmo especially), but I'm not thrilled by the thought of using MacPaint for everything.
If you do find a smoother workflow for 1bit art I bet the future Playdate game makers (hopefully myself included) would be overjoyed to hear it.


Whilst the single level Undo is definitely a thing (Photoshop didn’t get it until v5!) paint apps on Macintosh got much, much better than MacPaint over the course of the 1990s. The one I use, Deneba artWORKS, allows mixing vector and pixels, has layers, dither fills, and more: https://blog.gingerbeardman.com/2021/07/30/playdate-1-bit-illustration-postmortem/

I also helped Lucas out with my solution to running Macintosh on iPad. You can read about that here: https://blog.gingerbeardman.com/2021/04/17/turning-an-ipad-pro-into-the-ultimate-classic-macintosh/