Novantica is set in an open world environment with various interior scenes that asynchronously load in when the player is nearby. When I started developing the game, I was originally planning to use a top-down third-person perspective, like many of my favorite old adventure games. Because the game is set in an urban environment, I also wanted the player to be able to look up towards the sky and get a sense of the urban architecture. I tried a few different approaches to shifting the camera based on how the player was moving, but never quite achieved what I'd hoped to. Eventually I decided to make the camera work like many third-person games do and let the player move the camera with the right stick on a controller.
This approach worked really well for the open outdoor areas, but made the interior scenes feel cramped and awkward. Many other games just leave things to the player to figure out, but I've never really liked how games did that. I can't tell you how many times I would stop in the middle of a cave in Tears of the Kingdom just to look around and try to understand where, exactly, Link was in relation to the cave. This free camera movement also made some of the puzzle mechanics in Novantica awkward – e.g. making it difficult to see where a box needed to be pushed.
Then, I had an idea I wanted to test out...
Since Novantica is set in a futuristic city, I always wanted to make sure the game had a certain level of city life – people walking around, bots zooming by, drones buzzing, trams gliding, and bikes rolling along. I hadn't done much 3D modeling before learning Unity, and, while modeling props, buildings, and other blockish things didn't seem too difficult, I struggled with creating humanoid characters. I knew that I wanted something low-poly, simple, and a bit cartoonish for the game's characters, but I'll admit that character design is probably not my strong suit.
One of the more perplexing bugs I've encountered while working on Novantica was when all the NPC extras in the game suddenly turned a greenish-yellow color. I had just swapped out URP materials for ones using a new shader, so I suspected that change made this happen, but I didn't want to revert back to the old materials. To make matters worse, this issue was only happening in builds, not in the editor. I started to dig in, but didn't realize it would take so long to track down.
In March 2020, I remember the former mayor one day telling people to go out for dinner and see a show – and the next day hearing that it wasn't safe to get within six feet of another person. If you've never been to New York, take my word for it: it's nearly impossible to walk around outside without getting within six feet of another person. In hindsight, it was relatively safe to be outdoors, but at the time, we were afraid to step outside the door for a few weeks.
I also remember a lot of people thinking that the whole thing would blow over in a month or two, but everything that I had been reading made me suspect that was far too optimistic. As someone who walks to get to 99% of the places I'm going and thoroughly enjoys it, I realized that I would have to pick up a few new hobbies to keep my sanity.
For a while now, I’ve been interested in the idea of creating portable, interoperable functional UI components that can work in any DOM rendering library, whether it’s React, Preact, hyperscript, bel, yo-yo, or some other library.
The idea of functional UI components is a simple one: pass arguments into a function and it returns a representation of the DOM, usually with encapsulated styles and interactivity handled with callbacks to a global state, a la Redux.
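To make the idea concrete, here's a minimal sketch of a portable component. The `h` signature loosely mirrors hyperscript-style APIs like `React.createElement` and Preact's `h`; the `VNode` shape and the tiny `h` implementation below are illustrative stand-ins, not any library's actual types.

```typescript
// A hyperscript-style signature shared (roughly) by React.createElement,
// Preact's h, and plain hyperscript. The VNode shape here is illustrative.
type VNode = { tag: string; props: Record<string, unknown>; children: unknown[] };
type H = (tag: string, props: Record<string, unknown>, ...children: unknown[]) => VNode;

// A portable component takes h as a parameter instead of importing a
// specific renderer, so the same function works across rendering libraries.
const Button = (h: H) => (props: { label: string }) =>
  h('button', { className: 'btn' }, props.label);

// A minimal h implementation for demonstration; in a real app you'd pass
// React.createElement, Preact's h, or similar.
const h: H = (tag, props, ...children) => ({ tag, props, children });

const vnode = Button(h)({ label: 'Save' });
console.log(vnode.tag, vnode.children[0]); // button Save
```

Because the component closes over whatever `h` it is given, swapping rendering libraries is a one-line change at the call site rather than a rewrite of every component.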
I recently read this excellent article, in which the design team at Vox devised a testing framework for new UI components introduced into their pattern library. While the methods they suggest are excellent, and what I’d consider something that should be industry-standard in our field, it got me thinking that this concept could be taken a step further. What if designers wrote actual unit tests for UI components? What if those tests were actually applied in user acceptance testing, A/B tests, and tested against performance metrics?
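As a thought experiment, a designer-written unit test might look like the sketch below. The `ButtonSpec` shape and the specific thresholds (a 44px tap target, 16px type) are my own example values drawn from common accessibility guidance, not anything from the Vox article.

```typescript
// Hypothetical component spec a designer could assert against.
interface ButtonSpec {
  height: number;   // rendered height in px
  fontSize: number; // type size in px
  label: string;
}

// "Design unit tests": each check encodes a design rule as an assertion.
function testButton(b: ButtonSpec): string[] {
  const failures: string[] = [];
  if (b.height < 44) failures.push('tap target under 44px');
  if (b.fontSize < 16) failures.push('type smaller than 16px');
  if (b.label.length > 24) failures.push('label too long to scan');
  return failures;
}

const button: ButtonSpec = { height: 48, fontSize: 16, label: 'Sign up' };
console.log(testButton(button)); // []
```

Run against every component in a pattern library, checks like these could flag regressions in usability the same way conventional unit tests flag regressions in behavior.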
Everything in a UI is a component. This includes buttons, inputs, forms, promotional modules, pages, user flows, etc. I use the word component not only because this is how the underlying code is written in libraries like React and Ember, but also because pieces of a well-designed UI system should be composable.
React is a great way to generate static HTML with a component-based UI. One of the biggest hurdles to working with React is the amount of boilerplate and build configuration it takes to get going. I wanted to make it dead-simple to start building static pages with React and without the need to install tons of npm modules and configure webpack.
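The core idea can be sketched without any build tooling at all. The toy `render` function below stands in for what `ReactDOMServer.renderToStaticMarkup` does – walk a tree built from component functions and emit an HTML string; the `Node` shape and component names are illustrative, not React's API.

```typescript
// A minimal stand-in for renderToStaticMarkup: recursively turn a tree of
// nodes into a static HTML string.
type Node = string | { tag: string; children: Node[] };

const render = (node: Node): string =>
  typeof node === 'string'
    ? node
    : `<${node.tag}>${node.children.map(render).join('')}</${node.tag}>`;

// Component functions compose into a static page.
const Title = (text: string): Node => ({ tag: 'h1', children: [text] });
const Page = (title: string): Node => ({ tag: 'body', children: [Title(title)] });

const html = render(Page('Hello'));
console.log(html); // <body><h1>Hello</h1></body>
```

In a real project the resulting string would be written out to an `.html` file at build time, which is exactly the workflow this kind of static-site tooling automates.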
When it comes to designing for the Web I like to follow a handful of general principles. First, design for the medium, or as Frank Chimero puts it, follow “the grain of the Web”. The Web is fluid – based on screens and devices of varying sizes – and typography on the Web should reflect that. Second, design content-out, which usually means designing around a strong typographical base since the large majority of Web content and UI is text. And last, design with modular scales. Things built on the Web should be fluid and infinitely scalable. Using modular scales in a design complements that idea and keeps things organized in the face of growing complexity.
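A modular scale is just a base size multiplied by a ratio repeatedly. The sketch below generates one; the 16px base and 1.25 ratio (a major third) are example values, not a recommendation from this article.

```typescript
// Generate a modular type scale: base * ratio^0, base * ratio^1, ...
// Values are rounded to two decimals for use in a stylesheet.
const scale = (base: number, ratio: number, steps: number): number[] =>
  Array.from({ length: steps }, (_, i) => +(base * ratio ** i).toFixed(2));

const sizes = scale(16, 1.25, 5);
console.log(sizes); // [16, 20, 25, 31.25, 39.06]
```

Deriving every font size, and often margins and line heights too, from one base and one ratio is what keeps a growing design system feeling related rather than arbitrary.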
Virtually every style guide has a color palette section in its documentation. Many times I’ve seen this documentation created manually, where every change to a color requires updating the values in two places – the stylesheet and the style guide. This often leads to one falling out of sync with the other, and makes maintaining a living style guide more difficult.
The problem with this approach is that the values are being defined in two different places. For a true living style guide, the code should serve as the single source of truth. Extracting color values from CSS can help keep documentation in sync, expose outdated colors, and point out opportunities for normalizing designs.
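A rough sketch of that extraction step: pull every hex color out of a stylesheet, normalize, and dedupe, so the style guide's palette can be generated from the CSS itself. The sample stylesheet and regex here are illustrative; a production tool would use a real CSS parser.

```typescript
// Example stylesheet; in practice this would be read from the real CSS.
const css = `
  .btn   { color: #fff; background-color: #07c; }
  .alert { color: #FFF; border-color: #e44; }
`;

// Match 3- and 6-digit hex values, normalize case, and dedupe, so that
// #FFF and #fff count as the same color.
const colors = [...new Set(
  (css.match(/#(?:[0-9a-f]{3}){1,2}\b/gi) ?? []).map(c => c.toLowerCase())
)];

console.log(colors); // ['#fff', '#07c', '#e44']
```

Running this against a real codebase tends to surface near-duplicate colors – three slightly different grays, say – which is exactly the normalization opportunity mentioned above.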
I’ve been dabbling with React for a few months now and using it in several small open source projects to better understand the technology. React’s focus on reusability, along with the ability to install and require components via npm, provides an elegant way to rapidly build application UI in an efficient and consistent way. It’s also a great way to handle server-side rendering and provides high cohesion between markup and display logic.
CSS was first introduced as a way to reduce the complexity of using inline styles and to help separate concerns. After years of ballooning stylesheets with the same values being used over and over and losing sync, CSS preprocessors introduced variables to help keep values defined in a single place. Soon custom properties will be part of the CSS specification, which promises a native, more robust approach than what preprocessors can do.
While variables and custom properties make updating multiple instances of the same value trivial, we often still end up with multiple instances of the same property-value definitions spread throughout a global stylesheet.
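Those repeated definitions are easy to detect mechanically. The sketch below counts each `property: value` pair in a stylesheet and reports the ones declared more than once; the sample CSS is made up, and a real audit would use a proper CSS parser rather than a regex.

```typescript
// Example stylesheet with duplicated declarations.
const css = `
  .card { margin-bottom: 16px; color: #333; }
  .post { margin-bottom: 16px; }
  .bio  { margin-bottom: 16px; color: #333; }
`;

// Tally each property: value declaration.
const counts = new Map<string, number>();
for (const [, decl] of css.matchAll(/([\w-]+\s*:\s*[^;{}]+);/g)) {
  const key = decl.replace(/\s+/g, ' ').trim();
  counts.set(key, (counts.get(key) ?? 0) + 1);
}

// Any declaration appearing more than once is a candidate for a utility
// class or a shared rule.
const repeated = [...counts].filter(([, n]) => n > 1).map(([d]) => d);
console.log(repeated); // ['margin-bottom: 16px', 'color: #333']
```

Surfacing these duplicates is the first step toward consolidating them, whether into utility classes, shared rules, or a smaller set of custom properties.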
Every once in a while I hear someone complain about the visual homogenization of the web, and front-end frameworks often bear the brunt of the criticism. This visual sameness isn’t necessarily a bad thing.
I’ll admit it.
I’ve been dabbling with HTML and CSS for years – building small websites for myself and friends and building prototypes to test designs. And, while I’ve been fascinated with the idea of designing in the browser for a long time, it wasn’t until recently that it became much, much faster for me than using traditional design software.
About a year ago, I wrote Hamburgers & Basements: Why Not to Use Left Nav Flyouts.
Since then, a few things have happened.
Your tiny type is hard to read – no, not hard to read, impossible to read. I carry my phone with me everywhere, but I always seem to forget my magnifying glass. I tap the Safari Reader button, but that’s not a solution to the problem. That’s a band-aid for your bad typesetting.
When opening an application, a user should be able to understand its functionality, see relevant content, and get to where they want to go. Applications that obscure navigation with the intent of focusing on content can make finding specific information difficult. On the other hand, skewing towards too much navigation can overwhelm the user. Mobile apps should balance navigation for users with different information needs.
While table views provide a clear and simple way to navigate certain types of content, mobile should be about putting content and user goals first and navigation second. Don't overload the user with navigation choices; show meaningful content instead. Even though tab bars are great – sitting below the content, out of the way until the user needs them – there are new opportunities to explore content-centric contextual navigation when designing for mobile.
"Good design makes a product understandable" – Dieter Rams
Good navigation should do at least three things well: (1) it should allow the user to navigate; (2) it should serve as wayfinding, letting the user know where they are; and (3) it should help the user understand what the product is capable of. If your navigation is not doing these three things, something's wrong.
I’ve been producing electronic music on my computer for about a decade now, and I don’t have a whole lot to show for it. After moving to DC from Shanghai, where I played a lot of live sets and DJ gigs, I realized there wasn’t much of a music scene in DC, and I stopped playing out. After a few years, I noticed that I generally wasn’t being inspired, and I wasn’t growing much as an artist. I also noticed that I had a tendency to never finish the tracks that I’d started. I was pretty good at creating catchy little loops, but they never evolved into anything beyond that.