Building a better UI workflow in Unity

A lot of people ask us about our thoughts on Unity as an environment, and I thought I’d share a bit of one system I’ve been working on recently.

Typically, we have only good things to say about Unity – however, our most common complaints relate to the UI system(s).  The Unity 2.0 immediate-mode OnGUI system is quite configurable, but it gobbles performance – and isn’t artist friendly.  It can be very powerful, but layouts are complicated and unintuitive – and did I mention it eats performance?  While we use it heavily for editor GUI development, we consider it unusable in a game runtime environment.

Unity 1.0 GUITexture and GUIText, on the other hand, are simple to lay out and use, and performant, but nowhere near feature complete.  I’ve worked with sprite-based solutions as well, but hate the frequent pixel-imperfect rendering and aliasing issues – along with the inability to display Unity’s super-cool dynamic fonts, which are essential for displaying Asian character sets.  Side note to the devs at Unity – feature request: please expose the UVs of dynamic fonts for consumption by alternate UI display libraries.

What we really needed was the simplicity of 1.0 GUI layouts – with the power of Unity GUI 2.0.

Enter my most recent custom GUI layout solution, built entirely on top of 1.0 GUITexture and GUIText.  This system could easily support alternative rendering methods, such as a sprite system; however, I’ve chosen to use the 1.0 GUI.  Without further ado:

[Image: the UITransform component in the Unity Inspector – “Unity UI, herpa derp!”]

The UITransform component is the workhorse of this system: a custom component we created which enables the construction of an alternate UI scenegraph right inside the normal scenegraph.  It tracks pixel and parent-size-relative positions, pixel and relative sizes, its UITransform parent, and its UITransform children.  It also exposes an anchor property, a depth property, and event delegates for all sorts of common UI behavior: hover, active, release, and so on.  The event delegates alone go a long way toward separating game functionality from the UI implementation – essential in large projects with multiple team members, and iteration upon iteration on the front-end UI details.  The math is very straightforward; the most important part is keeping the code clean, so you can properly subclass and override.
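For reference, here’s a minimal sketch of what such a component might look like.  The field and delegate names are my own illustration, not the actual class:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Hypothetical skeleton of a UITransform-style component.
// All names here are illustrative, not the shipping code.
public class UITransform : MonoBehaviour
{
    // Layout is a parent-relative fraction (0-1) plus a pixel offset,
    // for both position and size.
    public Vector2 relativePosition;
    public Vector2 pixelPosition;
    public Vector2 relativeSize = Vector2.one;
    public Vector2 pixelSize;

    public TextAnchor anchor = TextAnchor.UpperLeft;
    public float depth;

    // Cached hierarchy, so propagation is just iterating a list.
    public UITransform parent;
    public List<UITransform> children = new List<UITransform>();

    // Event delegates bridge game code to the UI without subclassing.
    public delegate void UIEvent(UITransform sender);
    public UIEvent OnHover;
    public UIEvent OnActive;
    public UIEvent OnRelease;

    // Absolute rect cached in pixel space, recomputed when layout changes.
    [System.NonSerialized] public Rect absolutePixelRect;
}
```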

UITransform is the base class for all the additional display functionality – and allows me to wrap up all sorts of annoying incongruities in Unity’s UI.  For instance, how the Color property on GUIText works differently from the one on GUITexture…  Yeah, killed that one.
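As an example of the kind of wrapping involved, here’s a minimal sketch of a unified color accessor.  It assumes the documented GUITexture convention that (0.5, 0.5, 0.5, 0.5) displays the texture unmodified, and tints GUIText through its material:

```csharp
// Sketch: one Color property, normalized so Color.white always
// means "draw the element at full, unmodified color".
public virtual Color Color
{
    get
    {
        if (guiTexture != null) return guiTexture.color * 2f; // 0.5 is neutral
        if (guiText != null)    return guiText.material.color;
        return Color.white;
    }
    set
    {
        if (guiTexture != null) guiTexture.color = value * 0.5f;
        if (guiText != null)    guiText.material.color = value;
    }
}
```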

Here is my UITexture component, which derives from UIGraphic, which in turn derives from UITransform.

[Image: the UITexture component inspector – “Many knobs and dials.”]

Once we’ve exposed virtual functions and delegates for hover and active, you can see we can do all sorts of neat stuff, like exposing override colors and textures for the hover and active states.  Unity 2.0 GUI does this too, but has weird behavior, like requiring you to insert a null texture to get hover colors to work properly.  Along similar lines, I’ve exposed built-in methods for FadeIn / FadeOut and Activate / Deactivate, which propagate down the UITransform tree.  Because of the parent/child caching, that propagation just requires iterating over some lists.  Other functionality we’ve neatly packaged includes word wrap (based on language), automatic enable / disable of components, and click-detection culling (which tests only the active components).
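The propagation itself is trivial once the children are cached.  A sketch of the idea, where Fade is assumed to be a coroutine that lerps the element’s alpha:

```csharp
// Sketch: deactivation and fades walk the cached child list,
// so there are no transform.Find or GetComponent calls at runtime.
public virtual void Deactivate()
{
    enabled = false;                          // drops it from hit testing
    if (guiTexture != null) guiTexture.enabled = false;
    if (guiText != null)    guiText.enabled = false;

    foreach (UITransform child in children)
        child.Deactivate();
}

public virtual void FadeOut(float duration)
{
    StartCoroutine(Fade(1f, 0f, duration));   // assumed alpha-lerp coroutine
    foreach (UITransform child in children)
        child.FadeOut(duration);
}
```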

You’ll also notice, in the debug inspector, that I’m caching everything in pixel space: each element keeps track of its parent-relative, pixel-absolute, and world-absolute position and size.
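The recalculation that fills those caches is simple.  A sketch, assuming (as above) that layout is a parent-relative fraction plus a pixel offset:

```csharp
// Sketch: compute the absolute pixel rect from the parent's, cache it,
// then push it into the 1.0 GUI element and recurse into the children.
public void RecalculateLayout()
{
    Rect parentRect = (parent != null)
        ? parent.absolutePixelRect
        : new Rect(0f, 0f, Screen.width, Screen.height);

    absolutePixelRect = new Rect(
        parentRect.x + parentRect.width  * relativePosition.x + pixelPosition.x,
        parentRect.y + parentRect.height * relativePosition.y + pixelPosition.y,
        parentRect.width  * relativeSize.x + pixelSize.x,
        parentRect.height * relativeSize.y + pixelSize.y);

    if (guiTexture != null)
    {
        // Zero the viewport position and drive placement entirely
        // through pixelInset, using z for GUILayer depth sorting.
        transform.position = new Vector3(0f, 0f, depth);
        guiTexture.pixelInset = absolutePixelRect;
    }

    foreach (UITransform child in children)
        child.RecalculateLayout();
}
```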

Here is the performance of a typical scene in a very large game.  There are about 20 elements on screen, with nearly 2,000 deactivated, representing different UI pages.  Notice the two OnGUI text fields are the biggest killer right now (GUI.Repaint).  Fortunately, as soon as the text fields are disabled, we recover that performance.  The 56 B of memory allocation come from Unity’s input event system; nothing to be done there.  Generally, both this solution and Unity’s 2.0 GUI are O(n); however, element for element, our solution is roughly 4 times faster – a speed difference which matters even on mid-level PCs, let alone iOS.  Put 100 elements on screen, and suddenly these numbers become meaningful.

[Image: profiler capture of the scene – “Always need to double check”]

There are some drawbacks to this approach which, depending on your requirements, can be problematic.

  1. Rotation of elements is out.  GUIText and GUITexture are 2D elements, and cannot be rotated.  Too bad.  We can only fake rotation the way we would in CSS2: with sprite sheets.
  2. Draw calls.  Sprite-based systems can effectively reduce the number of draw calls by atlasing, which is important on certain devices.  We incur as many draw calls as there are active elements.
  3. We still need the OnGUI system for text input, at least if we want to support localization.
  4. Materials – I’d love to be able to change the materials on GUITextures, the way you can on GUIText.  Along with that, exposing the border properties in code would be handy.  ::end feature request::

For future improvement, the last thing I’d like to do is fully wrap the text input scheme, which, given the new direct access to IME functionality in Unity 3.4.1, should be possible.  This would let us completely remove our calls to OnGUI and further improve performance.
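The shape of that wrapper would be something like the following sketch, which feeds typed characters from Input.inputString straight into a GUIText.  The IME composition handling is omitted, and the text field and OnSubmit members are hypothetical:

```csharp
// Sketch: a text field without OnGUI. Input.inputString delivers the
// characters typed this frame, including '\b' and '\n'.
void Update()
{
    foreach (char c in Input.inputString)
    {
        if (c == '\b')                        // backspace
        {
            if (text.Length > 0)
                text = text.Substring(0, text.Length - 1);
        }
        else if (c == '\n' || c == '\r')      // return submits
        {
            OnSubmit();
        }
        else
        {
            text += c;
        }
    }
    guiText.text = text;                      // display via the 1.0 GUI
}
```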

All in all, if, like us, you’re building a UI-dependent game in Unity, it is worth spending some time wrapping Unity’s UI functionality with code that handles layout the way you expect.  I find it very helpful to give UI designers a simple path to integrating assets, with WYSIWYG parity between Unity and Photoshop – and speedy layouts.  All of this depends on a solid foundation, which is easy to cook up with a little planning.

[Image: the finished UI in-game – “Looking pretty sharp”]