A UI System Architecture and Workflow for Unity

[2019 update] Source code released!

I’ve released a homebrew version of this architecture, which you can get at my github page. There’s also an examples repository and a live demo. It’s pretty much the same, except that there I refer to the “UI Manager” as a “UI Frame”, and the “Dialogs” are now called “Windows”. Other than that, the original post below should still have helpful information.

Foreword

In mobile games, especially F2P, there’s no escaping the development of a metagame, often many times more technically complex than the core loop itself. This means one thing: UI work. Lots of it.

A lot of developers frown on the idea of doing UI development. Admittedly, more often than not there’s a lot of repetitive, less interesting work to be done – and, just as often, there isn’t an understanding of the nuances required for proper UX, which frustrates a lot of programmers when they have to iterate heavily on seemingly minor things. Add to that the countless possible architectures that are proven, loved and hated, which usually spawn religious arguments about using MVC, MVVM, XGH… You probably know the drill.

Now is the point where an article would usually pull an XKCD and say “fear not, for the ultimate answer is here!”, but not only do I not believe in silver bullets, this also isn’t my first rodeo when it comes to proving myself wrong. That’s why I’m not talking about UI code architecture holistically. My focus is describing one specific way of doing things, which is good enough for medium-to-high complexity UI and was battle-tested in released mobile games. That said, it’s most likely too complicated if you simply want to display a few UI elements here and there and there’s no complex navigation. The main idea is encapsulating complexity within the “core” so the end-code is highly agnostic and simple to implement.

In a nutshell, this architecture is simply a “window manager” of sorts, with history and flow control, plus an accompanying workflow and general guidelines on how to organize things. This means it’s easy to adapt to any approach. If you want to go for single compound “view-controller” classes or go full StrangeIOC (although why would you ever), all or most of these ideas should still work. So read on and pick yer poison.

 

(Seriously, why would you)

(maybe you’re into steampunk and have a strange attraction for boiler plates)

(not judging tho)

 

 

(don’t use Strange)

Acknowledgements

I built this system for Blood Runs Cold. On the code-side, I’ve simplified some things and added some improvements, but a lot of the structure is inspired by a design used by Renan Rennó when we worked together in Legends of Honor. One trick regarding how to make setting up transition animations artist-friendly is derived from my time in Aquiris working on Ballistic. The trigger for writing this post was a short chat on twitter with Nikos Patsiouras and Steve Streeting, so if you find this useful, thank them as well!

A bit about Unity UI

Unity’s UI is pretty awesome, but it can also be pretty shitty: it comes with a bunch of stuff out of the box, and layouting is super powerful and easy once you get a grip on how it works. It can, however, drive you nuts with all the little hidden details and a few questionable design decisions. You should watch this Unite 2017 talk (from 23:51 on) for some of the nitty-gritty details.

You’ll sometimes really want to write your own UI components, and sometimes you should. Nordeus, for example, wrote an entire custom UI hierarchy system to optimize reparenting in the hierarchy – which in the near future possibly won’t be that much of an issue anymore. But remember it’s all about cost and benefit: if you’re frustrated about how something works (or why something is NOT working), you might jump to conclusions too soon and implement something that may already be there, out of the box.

If you’re writing your own code, compose, don’t inherit. It’s pretty easy to extend the base UI classes and add extra functionality piggybacking on all the existing default UI components, so a lot of people tend to think of inheritance first. However, there’s a great chance you’ll end up having to extend more than one class later simply because you’re not using a native base component. An example I’ve been through in a project was a component someone had made for localized text. It was pretty neat and had a nice editor, but instead of being a separate component, it extended UnityEngine.UI.Text. Further down the road, we ended up with 2 or 3 more classes that also had to have a “localized” version. Had it been a separate component, we could probably have slapped it onto anything without having to worry about how the other parts worked.
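To make the contrast concrete, here’s a minimal sketch of the composition approach. Everything in it is invented for the example (LocalizeText, LocalizationService) – the point is just that the localizer drives a sibling component instead of subclassing it:

using UnityEngine;
using UnityEngine.UI;

// Hypothetical example: a separate component that localizes whatever
// Text sits next to it, instead of extending UnityEngine.UI.Text.
[RequireComponent(typeof(Text))]
public class LocalizeText : MonoBehaviour
{
    [SerializeField] private string localizationKey;

    private void Start()
    {
        // Drives the sibling instead of being a Text subclass, so the same
        // pattern can be slapped onto any other text-displaying component.
        GetComponent<Text>().text = LocalizationService.Get(localizationKey);
    }
}

// Stand-in for whatever localization lookup you actually use (assumption).
public static class LocalizationService
{
    public static string Get(string key) => key; // placeholder: echoes the key
}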

Canvases define which parts of your UI need to be rebuilt: if you change something in the hierarchy – even just a position – the whole mesh up to the nearest parent Canvas is rebuilt. This means that if you have a single Canvas for your whole UI and some small textfield with a number is rewritten every frame in some Update loop, your WHOLE UI will be rebuilt because of it. Every frame.

That said, you probably don’t want to update things every frame anyway. Rebuilding the UI is pretty expensive and generates quite a bit of garbage. Try to update things via events, only when something actually changes, or put the things that genuinely need to update every frame inside their own Canvas.
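As an illustration, this is what the event-driven version could look like – the PlayerWallet event is an assumption made up for the sketch, not something from the project:

using System;
using UnityEngine;
using UnityEngine.UI;

// Sketch: the label is rewritten only when the value actually changes,
// instead of every frame in Update(), so the Canvas is only dirtied
// when there's something new to show.
public class CoinCounter : MonoBehaviour
{
    [SerializeField] private Text label;

    private void OnEnable()  { PlayerWallet.CoinsChanged += OnCoinsChanged; }
    private void OnDisable() { PlayerWallet.CoinsChanged -= OnCoinsChanged; }

    private void OnCoinsChanged(int coins)
    {
        label.text = coins.ToString();
    }
}

// Minimal stand-in so the sketch is self-contained (assumption).
public static class PlayerWallet
{
    public static event Action<int> CoinsChanged;
    public static void SetCoins(int value) { CoinsChanged?.Invoke(value); }
}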

Ideally, you would even have all the static elements in one Canvas and all the dynamic ones in another, but that’s usually not practical: it’s, again, about finding the sweet spot between optimization and an easy workflow.

Use anchoring instead of absolute positioning. Having things responsive will make it easier to support multiple aspect ratios and resolutions. That said, make sure you also set up the reference resolution on your CanvasScaler from day 0. Your whole layout will be based on it, and changing it after things are done will most likely end in you(r UI artist) having to rework every single UI in the game. I usually go for 1920×1080.
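If you’d rather lock that decision in from code than trust everyone to leave the inspector alone, the setup is tiny (this uses the stock CanvasScaler API that ships with Unity):

using UnityEngine;
using UnityEngine.UI;

// Pins the reference resolution down in code, so every layout in the
// project is built against the same baseline from day 0.
[RequireComponent(typeof(CanvasScaler))]
public class CanvasScalerSetup : MonoBehaviour
{
    private void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.referenceResolution = new Vector2(1920, 1080);
        scaler.matchWidthOrHeight = 0.5f; // 0 matches width, 1 matches height
    }
}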

Last but not least, use transform.SetParent(parent, false) instead of assigning to transform.parent: this is a very common mistake, and the symptom is UI elements looking fine when you drag and drop them in the Editor but getting all screwed up when you instance them at runtime. I can’t tell you how many times I forgot about it in the early days, or how much code I’ve seen resetting position, scale and whatnot to make sure things don’t get all weird.
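A minimal sketch of the difference:

using UnityEngine;

public static class UIInstantiation
{
    // Sketch of the safe way to parent a runtime-instanced UI element.
    public static GameObject Spawn(GameObject prefab, Transform parent)
    {
        GameObject instance = Object.Instantiate(prefab);

        // Wrong: instance.transform.parent = parent;
        // That keeps the world transform, so position/scale end up skewed
        // relative to the new Canvas.

        // Right: 'false' keeps the local transform, which is what
        // RectTransforms expect.
        instance.transform.SetParent(parent, false);
        return instance;
    }
}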

With this out of the way, let’s get down to business.


Glossary

To better understand the rationale behind this system, some terminology needs to be defined. Remember these names are completely arbitrary; they’re just names that were deemed “good enough” to represent something – the important thing is understanding the concept behind them.

We’ll use this wonderful mockup I made on Google Slides as an example:

A Screen is any self-contained part of a given UI. Screens can be of 2 types:

  • Panels: a given chunk of UI that can coexist at the same time as other pieces of UI. Eg: status bars, elements on your HUD.
  • Dialogs: a Screen which is the main point of interest at a given time, usually using all or most of the display. Eg: popups, modals.

You can have multiple Panels open, even when a Dialog is open. You can have more than one Dialog visible at the same time, but only one of them is interactable (eg: you can have a Dialog open but darkened out and blocked by an interactable popup on top of it). Panels are simply open or closed at any given point, but Dialogs have a history.

A Widget is a reusable part of a Screen. It can be displayed visually in several ways, but you most likely will have a single component that drives it.

A Layer is responsible for containing and controlling a specific type of Screen.

Using our example again, here’s a possible way of dividing it with those concepts:

(1) is the Level Select Dialog, which is the main interaction element for the user at this point. However, you can still see (2), which is a Panel to display how much currency the user has, and even interact with (3), which is a navigation Panel. If you clicked one of the navigation buttons, (1) would be replaced with the target Dialog, eg the Shop or Options screens, but all the rest would remain the same. Last but not least, (4) would be a Widget: it’s a part that you’ll most likely instantiate dynamically several times, and could even be used in other parts (on an Achievements Dialog for example).

How do you define what should be what, and how granular should the division be? Well, common sense – or as we say in Portuguese, “good sense”. Remember the code overhead difference between a Screen (which needs to plug into the system, be registered etc) and a Widget (which is simply a component). If a big chunk of your display has contextualized information that should disappear as you navigate away, it’s most likely a Dialog; if it’s always there and cares little about navigation states, it’s a Panel. If it’s a part of something bigger, it’s probably a Widget. Try looking at other games and watching for these behaviours to get a better grip on the concept.

The UI Manager is the central control spot for everything. If you’re of the school of thought that “omg, managers are monolithic super-classes and you should avoid them like the plague!”, just… use whatever name makes you feel better. This should actually be a very simple façade – the real meat is in the Layer code. You can even split it into several Commands, or however you like to do things – just make sure the code is localized in a way that lets the rest of your game easily communicate with a single spot in the UI system. Never, ever access Screens or Layers directly.

Fun fact: Blood Runs Cold is a narrative-driven game, and the narrative designers were using the term “dialog” for narrative conversations. I ended up naming the classes “Dialogue” (narrative) to differentiate the narrative systems from “Dialog” (UI), which in retrospect could have used a bit more thought, as when referring to them out loud we said a fair share of “I mean narrative dee-ah-log-oo-eh, not UI dialog”. Even more fun fact: the dialogues were displayed in a Panel. 😅

Hierarchy

This is a rough example of what your UI Hierarchy would look like (brackets for which components they would contain):

  • UI  [UI Manager Code, Main Canvas]
    • UI Camera
    • Dialog Layer [Dialog Layer Code]
      • Dialog A   [Dialog A controller]
      • Dialog B   [Dialog B controller]
    • Panel Layer  [Panel Layer Code]
      • Panel A    [Panel A controller]
      • Panel B    [Panel B controller]

As you (should) know, Unity sorts elements based on hierarchy order, so lower elements get drawn last. In this specific example, Panels are always drawn on top of Dialogs.

When setting up your main Canvas, use “Screen Space – Camera”, not “Screen Space – Overlay”. It behaves the exact same way (besides the extra camera), and you can easily have things like 3D models, particle systems and even post-processing FX.

Organizing your screens and widgets

One thing that would make everyone’s life easier would be nested prefabs. Unfortunately, that feature has achieved meme status (but you can always check to see if this article is outdated). Some people like the idea of using a scene to store Screens (so every element can be a prefab). I’ve never personally done this, so I can’t assess whether it’s better or worse. What I usually go for is a single prefab per UI screen and several prefabs for widgets. This obviously calls for proper communication to avoid merge conflicts, but we never had any major problems. Whatever you do, try to split things into as many prefabs as you can.

Fun fact: once there was a minor redesign in the Ballistic UI to change the color of all purchase buttons from yellow to blue, but there wasn’t time for anyone to make a tool for that change. We had an intern reworking all of them manually for a couple of weeks. Sorry, Luís!

Ideally, your UI artist should work within Unity and be responsible for rigging your UI. The engine gives us a pretty artist-friendly toolset out of the box, so don’t waste development time making slightly-offset, programmer-art-y things that will trigger artist OCDs when you could be focusing on coding. If the artist doesn’t know how to use Unity, help them learn and enable them, and plan for this in your pre-production and early production timelines. If you think UI artists are not smart enough and can never be trusted to touch the repo, stop being such a fucking snob. If you’re a UI artist and think you can’t/shouldn’t do it, embrace the challenge and remember that, like anything else, practice makes perfect.

You can make a suite of editor tools for your UI artists, but whatever you do, do it WITH them: they are the ones who will work with it, and they will develop their own daily workflows. That said, the tech team is the gatekeeper and will define the policies for the repo, assets and guidelines on how to assemble and rig things, hierarchy and performance concerns. This will most likely also include naming conventions and it’s helpful to define those as early as possible.

Fun fact: back in Legends, I made this Editor window where you could preview and create widgets, and could easily add, remove and replace pieces of UI prefabs. It was pretty neat. Also, it was never used by the UI artists. They just copy pasted their way through and worked fast and comfortably enough like that.

 

Regarding assets, you’ll definitely want to atlas your sprites. This is a whole can of worms of its own, especially when dealing with Asset Bundles, and even more so if there are variations involved (eg: HD vs SD resolution assets). I usually use Unity’s Sprite Packer and keep all the sprites that go into the same atlas in the same folder, so it’s easier to make sure all of them go into the same Asset Bundle (otherwise you might end up with copies of your atlas sprinkled all around).

Workflow-wise, I recommend iterating on external mockups as much as possible before jumping into Unity, using tools like Balsamiq, Axure, InVision, Origami Studio (free) or whatever your UI/UX designer prefers. I’ve even seen people using Powerpoint to better communicate their ideas interactively.

After the mockups are ready and approved by Game Design for a first implementation, your UI artist can start assembling the Screen prefab. When that is done, the dev team can pick it up and implement it. When everything is working properly, you can pass it back to your UI artist for any final tweaks.

If you don’t have enough time for the complete pipeline (we usually don’t anyway), you can always make crappy programmer-art versions to start implementing all the functionality you’ll need on the dev side, then substitute the final prefab later. In an ideal scenario, you’d have a UI artist quick enough on their feet to do the re-rigging for you if they change anything.

Code architecture

The UI System code has 3 parts: the main façade, Layer controllers and Screen controllers. The idea is that each Layer can treat its Screens in any way it likes, which means a Layer groups things by functionality. So far, I’ve been able to represent all possible cases as either Dialogs or Panels.

In the hierarchy example, I had 2 layers. In practice, however, you’ll most likely need more than that. You could, theoretically, create several Layers, but then you’d have multiple things that actually work roughly the same way (which would most likely create the need for extra code in the façade, which you don’t really want). My way out of this was creating para-layers: they’re simply extra objects in the hierarchy to which the Layer code reparents Screen transforms. During the reparenting, you can also treat some special cases: in BRC, for example, we had a darkener object that was activated every time a popup appeared, and some things were reorganized in the hierarchy. We had one para-layer for Dialogs (used for making sure popups were always on top) and several para-layers for Panels (one for regular UI, one for blockers, one for tutorial elements etc).
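Here’s a rough sketch of what the para-layer routing could look like. The PanelPriority enum and the field names are made up for illustration – the BRC code was organized differently:

using UnityEngine;

// Assumed priority levels, one per para-layer (illustrative only).
public enum PanelPriority { None, Blocker, Tutorial }

public class PanelParaLayers : MonoBehaviour
{
    [SerializeField] private Transform blockerParaLayer;
    [SerializeField] private Transform tutorialParaLayer;

    // Called by the Layer during registration: each Panel ends up under
    // the anchor that matches its priority, which controls draw order.
    public void Reparent(Transform screen, PanelPriority priority)
    {
        switch (priority)
        {
            case PanelPriority.Blocker:  screen.SetParent(blockerParaLayer, false);  break;
            case PanelPriority.Tutorial: screen.SetParent(tutorialParaLayer, false); break;
            default:                     screen.SetParent(transform, false);         break;
        }
    }
}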

All Layer controllers derive from AUILayerController. This class has all the boilerplate for showing and hiding Screens, and communicates with the Screens themselves. The PanelLayer is really simple: it doesn’t really do anything; it’s mostly there to strongly type that it handles Panels and to route them to the proper para-layers. The DialogLayer, however, has a lot of functionality for controlling history and queueing. As a comparison point, the Panel Layer is 65 lines and the Dialog Layer 217. Here’s a crop of the base class:

using System.Collections.Generic;
using UnityEngine;

public abstract class AUILayerController<S> : MonoBehaviour where S : IUIScreenController
{
    // Registered screens, indexed by their ScreenID
    protected Dictionary<string, S> screenControllers;

    public abstract void ShowScreen (S screen);
    public abstract void ShowScreen<P> (S screen, P properties) where P : IScreenProperties;
    public abstract void HideScreen (S screen);

    [...]
}

Every Screen can have an optional Properties parameter, used to pass a data payload to the Screen. For Screens that don’t really require a payload, there’s a parameterless version as well (implemented by extending the version that takes a default Properties class as a parameter). The Properties are also [System.Serializable] classes, which means you can define some (or all) of their fields directly on the prefab at Editor time. The lifetime of a Screen begins with a registration step: the prefab reference is passed to the Layer, then it’s instanced and registered bound to its ScreenID (it’s basically a Dictionary<string, T>).
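As a hedged sketch (the real class has more bookkeeping), the registration step inside the layer base class might look something like this:

// Inside AUILayerController<S> – illustrative only.
public void RegisterScreen(string screenId, S controller)
{
    if (screenControllers.ContainsKey(screenId))
    {
        Debug.LogError($"Screen {screenId} is already registered!");
        return;
    }

    controller.ScreenId = screenId;              // bind the id to the instance
    screenControllers.Add(screenId, controller); // the Dictionary<string, S>
    ProcessScreenRegister(controller);           // layer-specific hooks (eg: reparenting)
}

// Overridden by concrete layers for their type-specific initialization.
protected virtual void ProcessScreenRegister(S controller) { }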

Here’s a crop of the AUIScreenController:

using UnityEngine;

public abstract class AUIScreenController<T> : MonoBehaviour, IUIScreenController where T : IScreenProperties
{
    [Header("Screen Animations")]
    [SerializeField]
    private ATransitionComponent animIn;   // transition played when the Screen opens

    [SerializeField]
    private ATransitionComponent animOut;  // transition played when the Screen closes

    [Header("Screen properties")]
    [SerializeField]
    protected T properties;                // payload; defaults can be set on the prefab

    public bool IsVisible { get; private set; }
    public string ScreenId { get; set; }

    [...]

    // The only method a concrete Screen must implement; it runs once the
    // properties are guaranteed to be set.
    protected abstract void OnPropertiesSet();

    public void Show(IScreenProperties properties = null) 
    { 
        [...] 
    }

    [...]
}

Remember I mentioned the idea that whoever implements the UI doesn’t need to worry about anything? OnPropertiesSet is the only method that needs to be implemented for a ScreenController. Internally, we make sure that, at the point where it’s called, the Properties are guaranteed to be set, which means you can freely use the payload.
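For illustration, this is roughly what the end-code could look like – the level-select names are invented for the example:

using UnityEngine;
using UnityEngine.UI;

// Hypothetical payload: [System.Serializable] means defaults can be set
// directly on the prefab in the inspector.
[System.Serializable]
public class LevelSelectProperties : IScreenProperties
{
    public string chapterName;
    public int unlockedLevels;
}

// The only thing a concrete Screen has to implement is OnPropertiesSet.
public class LevelSelectScreenController : AUIScreenController<LevelSelectProperties>
{
    [SerializeField] private Text chapterLabel;

    protected override void OnPropertiesSet()
    {
        // By the time this runs, the payload is guaranteed to be assigned.
        chapterLabel.text = properties.chapterName;
        // ...build the level widgets based on properties.unlockedLevels
    }
}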

I wish I could say that the generics-fest internally made everything type safe automatically, but I have to admit there’s a single cast in there: since Show receives an interface and the Properties in the class are of type T, I had to up- and then downcast the parameter:

Properties = (T)(object)properties;

I’m pretty sure I could avoid this by giving it a bit more thought, but I didn’t want to spend much more time on it back then.
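For what it’s worth, one way it could probably be avoided is pattern matching (an untested sketch; needs C# 7.1+ for the generic type pattern):

// Inside AUIScreenController<T> – sketch of a cast-free Show().
public void Show(IScreenProperties screenProperties = null)
{
    if (screenProperties != null)
    {
        if (screenProperties is T typedProperties)
        {
            Properties = typedProperties;
        }
        else
        {
            Debug.LogError($"Wrong properties type for {ScreenId}: "
                         + $"got {screenProperties.GetType()}, expected {typeof(T)}");
            return;
        }
    }
    // ...continue with OnPropertiesSet() and the in-transition
}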

That leaves the Manager itself: it simply receives method calls (or responds to messages, if you want to keep the code free of cross-module method calls) and routes them to the proper Layer. I usually also add some handy shortcuts to important UI-specific things you’ll most likely need, like the UICamera or the MainCanvas. No code should ever communicate directly with the Screen or Layer code – it simply says “Hey, I need ScreenID XXX opened with this payload” or “close ScreenID YYY”, and the UIManager is your access point for all of that. Coming back to the “simple façade” thing: the biggest method in there is the one that initializes the system, and it’s less than 20 lines long (and actually pretty verbose).
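As a sketch of what that façade could look like – the routing helpers (IsScreenRegistered, ShowScreenById) are reconstructed from the description above, not taken from the released code:

using UnityEngine;

public class UIManager : MonoBehaviour
{
    [SerializeField] private Camera uiCamera;
    [SerializeField] private Canvas mainCanvas;

    private DialogLayer dialogLayer; // assumed concrete layer types
    private PanelLayer panelLayer;

    // Handy shortcuts to UI-specific things the rest of the game needs.
    public Camera UICamera => uiCamera;
    public Canvas MainCanvas => mainCanvas;

    public void OpenScreen(string screenId, IScreenProperties payload = null)
    {
        // Pure routing: each layer knows whether it owns the id.
        if (dialogLayer.IsScreenRegistered(screenId))
            dialogLayer.ShowScreenById(screenId, payload);
        else if (panelLayer.IsScreenRegistered(screenId))
            panelLayer.ShowScreenById(screenId, payload);
        else
            Debug.LogError($"Screen {screenId} is not registered in any layer!");
    }

    public void CloseScreen(string screenId)
    {
        // Symmetric routing to HideScreen on the owning layer.
    }
}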

Animation and flow control

Some people love the Animator to death, and they’re right: it’s an incredibly powerful tool, especially if you’re a humanoid character. Which, last time I checked, is not really the case for UI. I’m rarely a control freak over things, but when it comes to UI flow I might be one, and the idea of animations controlling THE FLOW of the UI gives me Vietnam flashbacks. The legacy Animation component actually worked pretty well for UI, but even if it doesn’t look like it’s going anywhere anytime soon, I avoid using anything considered legacy in production.

That said, if you have very simple UI, piggybacking on the Animator state machines can be a good idea. But as soon as you cross a certain complexity threshold (which, to me, is again very, very low), things can quickly go awry. And it’s not just about being conservative regarding workflow and asset count: there are also considerable performance issues linked to the Animator and Canvas updating. I didn’t know this beforehand, so it was nice to be gifted an extra argument instead of just having the gut feeling that it didn’t work well enough workflow-wise.

Juicy UI needs a lot of animations on Screens going in or out, and during those it’s really easy to create weird soft-lock conditions if the user (or, especially, QA) decides to tap a button. In the Screen code, there are built-in options for animating in and out, and interaction is blocked while an animation is happening. You can either do this by fancily controlling your EventSystem, or do it like I did and just slap a transparent fullscreen Rect on top of everything else to block raycasts.
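The blocker trick is tiny: an Image with zero alpha still swallows raycasts. Something along these lines (a sketch, not the project’s actual component):

using UnityEngine;
using UnityEngine.UI;

// A stretched, fully transparent Image that blocks all UI input underneath
// while a transition is playing.
[RequireComponent(typeof(Image))]
public class InteractionBlocker : MonoBehaviour
{
    private Image blockerImage;

    private void Awake()
    {
        blockerImage = GetComponent<Image>();
        blockerImage.color = Color.clear;  // invisible...
        blockerImage.raycastTarget = true; // ...but still catches raycasts
        blockerImage.enabled = false;
    }

    public void Block()   { blockerImage.enabled = true; }
    public void Unblock() { blockerImage.enabled = false; }
}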

On the code side, there’s an ATransitionComponent:

using System;
using UnityEngine;

public abstract class ATransitionComponent : MonoBehaviour
{
    /// <summary>
    /// Animate the specified target transform and execute callWhenFinished when the animation is done.
    /// </summary>
    /// <param name="target">Target transform.</param>
    /// <param name="callWhenFinished">Delegate to be called when the animation is finished.</param>
    public abstract void Animate(Transform target, Action callWhenFinished);
}

These can be set up in the “transition in” and “transition out” fields of any AUIScreenController. In the AUIScreenController, there are 2 events the Layer registers to, which signal when animations are over. This keeps the code flow closer to what you see on screen regarding timing and things like blocking interactions: whenever a Dialog opens or closes, it triggers the UI blocking, animates, and when the animation is over, the embargo is lifted. Because of this, we never had a single bug report about misuse during transitions. If no animation is configured, the GameObject is simply activated or deactivated. Internally it’s a bit more involved, but this is a simplified diagram of the process:

 

The good thing about this is that you can easily build several different transition types, and they can be rigged directly by the artists. We had things like sliding in from a direction, fading, and even some fancy foldouts, all controlled by DOTween. That said, for some Screens you’ll probably want some flashy, awesomely complex animated stuff. To enable that, we simply cooked up an AScreenTransition that works together with the Animator, which means the UI artists could use Animator-driven animations if they wanted to.
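To make the contract concrete, here’s a hedged sketch of one such transition – a CanvasGroup fade driven by a plain coroutine rather than DOTween, but honoring the same “animate, then invoke the callback” contract:

using System;
using System.Collections;
using UnityEngine;

public class FadeTransition : ATransitionComponent
{
    [SerializeField] private float duration = 0.3f;
    [SerializeField] private bool fadeOut; // false = fade in, true = fade out

    public override void Animate(Transform target, Action callWhenFinished)
    {
        var group = target.GetComponent<CanvasGroup>();
        if (group == null)
            group = target.gameObject.AddComponent<CanvasGroup>();
        StartCoroutine(Fade(group, callWhenFinished));
    }

    private IEnumerator Fade(CanvasGroup group, Action callWhenFinished)
    {
        float from = fadeOut ? 1f : 0f;
        float to   = fadeOut ? 0f : 1f;
        for (float t = 0f; t < duration; t += Time.unscaledDeltaTime)
        {
            group.alpha = Mathf.Lerp(from, to, t / duration);
            yield return null;
        }
        group.alpha = to;
        // The Layer listens for this to lift the interaction block.
        callWhenFinished?.Invoke();
    }
}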

My biggest beef is that even for very, very simple animations you’ll still need an Animator Controller (or an AC override) and special sets of animations: since Unity’s animation system depends on hierarchy and names, you either have to do something smart with your hierarchy to make it possible for multiple screens to share the same animations, or you’ll end up with a boatload of assets just to make a screen slide in and out. Fortunately, there’s now the SimpleAnimation component, with which you won’t need an Animator Controller and can use an API more similar to the legacy system (I haven’t tried it, though).

At the end of the day, UX-wise, you’ll probably have a grammar for transitions, which means you’ll be able to make do with just a handful of types. It also means that in a lot of cases you can pull it off with simple parametric tweens and never even touch an Animator. Design your systems for the default cases and make exceptions possible, not the other way around; if everything is special, nothing is special; and if your UI is super flashy and noisy all the time, for every single element, it might just end up distracting and hard to read.

Also regarding animations, I ended up making a little extension component for buttons, configured via ScriptableObjects. This allowed us to batch-change all the buttons of a given type without having to re-serialize all the prefabs they exist in. A button simply needed to reference a given SO, and it would automatically animate and play sounds in a given way; if that specific kind of button ever changed those behaviours, we only had to change them in one spot. In theory, you could build a similar scheme for types of screens and their in-and-out animations as well.
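A rough sketch of that idea – the asset and field names are invented, and the “animation” is reduced to a scale change to keep it short:

using UnityEngine;
using UnityEngine.UI;

// The shared configuration lives in an asset; changing it re-skins every
// button of this type without re-serializing any prefab.
[CreateAssetMenu(menuName = "UI/Button Config")]
public class ButtonConfig : ScriptableObject
{
    public AudioClip clickSound;
    public float pressedScale = 0.95f;
}

[RequireComponent(typeof(Button))]
public class ConfigurableButton : MonoBehaviour
{
    [SerializeField] private ButtonConfig config;

    private void Awake()
    {
        GetComponent<Button>().onClick.AddListener(OnClick);
    }

    private void OnClick()
    {
        // Simplified feedback: a real version would tween the scale back.
        if (config.clickSound != null)
            AudioSource.PlayClipAtPoint(config.clickSound, Vector3.zero);
        transform.localScale = Vector3.one * config.pressedScale;
    }
}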


Afterword

In retrospect, the UI system didn’t go through any major changes at any point in production, and it allowed us to build everything up very quickly. The focus on being agnostic paid off: when we had to prototype a new feature, it was easy to write quick-and-dirty code in a very localized way and clean it up afterwards.

Legends used StrangeIOC, and I honestly didn’t see any major advantages in it – it was mostly boilerplate. On BRC, we used a single class for controlling screens, and it was enough for what we needed, with a focus on making reusable widgets whenever possible. Some screens had all their data prepared outside and passed in via Properties, while others just accessed the data directly. We used a lightweight global messaging system (actually inspired by StrangeIOC’s Signals), which I’ll also write about, and that helped us achieve pretty much total decoupling of the UI – things would happen in the game and, if some UI element present cared about them, it would update itself; otherwise, nothing would happen.

While this was a pretty long post, it feels weirdly nonspecific. I unfortunately can’t share the code itself, as the latest version was created at work, but I’m currently rebuilding it from scratch at home and can maybe share it in the future. That said, I hope this contains some helpful pointers and ideas. If you think I totally missed the mark at any point, or if you have any specific questions, feel free to add a comment or hit me up on twitter.

20 thoughts on “A UI System Architecture and Workflow for Unity”

  1. As an experienced programmer that’s toying with the thought of learning Unity and making some simple games, I found this post both very useful and very scary: I always had the impression Unity was full of things that are non-obvious and easy to get wrong, and you’ve mostly confirmed my fears… :-/
    Well, at least now I have some ideas about what to try to avoid and what to try do achieve! 😀

    1. Hey Roberto,

      Honestly, I think that shouldn’t worry you at all, especially in simpler projects. All the info about the nitty-gritty stuff is out there; the problem is that it’s not super localized – which I understand, as they probably don’t want to overburden the documentation. But the community is huge and it’s easy enough to find answers. A lot of the things I mentioned are also about optimization, so not something you have to worry about unless you’re on mobile platforms, for example.

      I’m obviously biased, but the setup time to get something up and running in Unity is really, really fast, and so is iterating. The amount of stuff you get out of the box, and not having to worry about anything besides making your game, makes those non-obvious things specks of dust you won’t see unless you’re really looking for them – and just like real specks of dust, they’re only really a bother if they get stuck in your eye 😀

      Thanks for reading and good luck on your projects!

  2. Thanks for this article, it is rare to find something on UI architecture 🙂
    – How does the UIManager know if a screen is a panel or a dialog, and on which layer it should be parented? Some arbitrary decision based on screen ID?
    – For example, can you show us how your example screen would be initialized, with all its parts?
    Still waiting for your github example 😛

    1. Hey, thanks for reading!

      Panels and Dialogs implement different interfaces, so I simply check their type to decide which Layer they should be submitted to. The registration process is more or less like this:

      • There’s a call from somewhere to register a screen
      • The Manager checks the type of the screen and calls the registration method in the proper Layer
      • The Layer checks if there’s already a screen with that ID. If not, it processes the new screen (setting up its id, adding it to the screen dictionary etc)
      • The specific Layer implementations then do their type-specific initializations (eg: reparenting, hooking up to animation and close-request events)

      In Blood Runs Cold’s case, we had a ScriptableObject that held the names of all the assets for our screens (we used the asset name as a unique id, both in the asset bundles and as the ScreenID). During loading, we’d asynchronously load the screens from asset bundles and, when they were downloaded, register them with the UI Manager. If you can just deploy everything together, you could just as well add all the screens to the UI object and iterate through the children, registering them.

      The screen itself is not fully initialized until it’s opened. Every screen has an OnPropertiesSet() method that needs to be implemented if you want non-static data to be displayed (either by fetching it on the screen code itself, or by getting it passed from the Open() call). The system makes sure it’s only called when all necessary initializations are done.

      Github might still take a while, but it’ll happen. Eventually 😀

  3. Thanks for the explanation!
    Yeah, I wanted to use Scriptables too, to be able to add other screens later with some configuration data (id, layer, anim in/out).

    Last question… what’s your approach to knowing what to show and when?
    In your example, let’s say the next screen must hide the top bar, and right after, you have to show it again. Is it all controlled inside the ScreenController? e.g.:
    – Screen 1 (level-select): UIManager.Show(“top-bar”);
    – Screen 2: UIManager.Hide(“top-bar”);
    – Screen 3: UIManager.Show(“top-bar”);
    With 1-2 panels it’s OK, but with 10+ panels it’s unmaintainable 😉 (or are there other solutions – configuration per screen?)
    I’m surely over-thinking, but it’s still unclear to me with your architecture 😉

    1. I think that depends on the kind of overarching game/UI you’re building, but you usually want to control as much as you can outside of UI code. Screens should be able to open any other screen, but only be able to close themselves. However, the opening/closing process was made to be completely headless because I wanted the UI system to be highly agnostic (as I prepared it to be used by other projects, with different needs/architectures), so it allows you to open or close from anywhere in your codebase.

      But since we had a stack for history and an opening queue, we didn’t have issues of that kind. For Dialogs closing always happened when the dialog itself requested it (because of user input) or when we closed the whole UI (when changing scenes, for example). For Panels, the opening and closing were tied to some overarching game states (eg: a hud on a given game mode).

      The idea of dividing things into Dialogs and Panels helps with your specific example: if we have something like a top bar that is present across states, it will most likely be an “always open” panel. If there’s something higher-priority in a given UI state, we kept it open on top of the rest, which means we didn’t need to close what was behind it – you’re at a modal interaction point, after all (we did have the option of hiding Dialogs or not when they lost focus).

      If you check this video from 5:50 onwards, it might be easier to understand:

      The HUD has a word bar, a timer, a bonus bar and a hint button on the bottom right corner. Each one of those is a different Panel. Which Panels get opened depends on the game mode you’re playing, so there’s a UI config for each game mode that lists the relevant Panels; the game mode simply requests to open all of them when it starts, and to close them when it ends.

      At 6:40, when the player finishes the level, a Dialog with your quest details appears. You can interact with it to collect your coins for completing the quest. Notice that on the top, we have a bar showing your money. That bar is the Resources Panel.

      When the coins are collected, the Quest Dialog closes, you get a popup and, when you interact with it, you go back to the playable chapter. Notice that throughout this time, the Resources Panel never goes anywhere, and we don’t need to show/hide it because of the priority of our screens (eg: the Resources bar is always on top, except when there’s a popup that blocks the whole screen).

      Code-wise, the Quest Details Dialog knows that when you collect a quest, it should close itself and go back to the quest list. When there are no more quests, there’s an event that triggers the popup to warn the player they can go back to the chapter to continue playing. When that button is pressed, we fire another event telling the game to navigate to the scene selection, which is responded to by the navigation bar, which is responsible for both user input and reacting to navigation requests. Since we’re constantly showing/hiding things based on certain states, the code kind of simply “takes care of itself” based on the UI flow.

      At the end of the day, you’ll usually have a “grammar” for your UX, so a navigation that makes sense comes with not that many code worries out of the box. In our case, we either made things state-based (eg: when you change game modes, the game modes know to open and close certain panels) or delegated control to some sub-module (eg: the navigation panel, which centralizes all the control for moving between the main dialogs of the game). And in the case of the game reacting to things, that’s usually delivered by a popup which, since it’s always an overlay, can be triggered by anything in the code, as it will block the rest of the interactions, and the history will take care of the cleanup once it’s closed.

      One important thing is that in our flow, we never had any loops, which probably made things less bug prone. But that’s something that anyway I would recommend avoiding as much as you can, both for code and UX purposes.

  4. Really good detailed answer! It makes things clearer in my head 🙂 And I can’t agree more on not using loops.

    I think I’m gonna take some of your ideas and adapt them to my game.

    Thanks a lot for the answer and keep up good work!

  5. Hi Yanko.
    Awesome post.
    I have some questions. First, sorry for my poor English. I’m Brazilian like you, but I’m not as good at English as you are.

    I’ve already worked on 5 projects with the responsibility of writing the UI system code, but I couldn’t create a generic one because the UI artists always came up with situations I hadn’t anticipated.

    At the moment I’m studying how to finally code the UI system that will be used by our following projects. This post is a gem for me. But I don’t know if it solves a common problem – I guess I don’t understand.

    Situation:
    Some dialogs have an internal flow. Eg:
    A Reward dialog that opens with a scale transition and has a button to open the box. After the user clicks it, inside the dialog, a fade transition makes the first screen fade out and a new screen fade in with the reward and a “continue” or “close” button.

    I think the code flow is:
    UiManager.Open(“Reward dialog”)
    When the open box button is clicked:
    UiManager.Open(“Reward Review content”)
    When the continue button is clicked:
    UiManager.Close(“Reward dialog”)

    Note that I haven’t closed the Reward Review, and I must reset this dialog to open it again in a correct state.

    I guess that can be done by making the whole internal flow completely separate – not using Screens, Layers or the UiManager. Or do you think I could create a layer inside a screen, or inside other layers?

    Other situation:
    Tabs. A screen that has tabs and transitions the inner content to the left or right, closing the current one and opening the selected one.

    I think the problem here is that the screen can transition in to the left or to the right (and the same goes for the out transition), so I must be able to change the transition at runtime. Do you know how to solve this?

    I hope you understand me.

    Thanks for your post. When I finish my system I want to put it in a public repo. (I guess my studio will be happy to help other developers, and those developers can improve the code and help us.)

    I wish success to you and all Brazilian developers.

    1. (Ok, I just wrote a HUGE reply to you, but I accidentally clicked a link and lost it. Yay! Will try to sum it up.)

      First things first: your English is just fine! I’ll answer in English as well for anyone who pops by (but if you need to, you can write in PT-BR and we’ll understand 🙂

      As usual, there’s no single answer. I have never thought about layers within layers, but I think that would most likely be overkill in any scenario. What you have to think about is reusability: is this screen/popup used in several different UI flows, ie, coming from several different screens? If so, you most certainly want it to be a separate Screen and to go through the UI system. If it’s tightly coupled with a specific screen, then you can simply put everything into that screen’s control and have a bunch of internal states. Remember you can always make several “widget” components to split the code (say, for different tab contents) and not have this huge screen controller class that has references to everything.

      If you use an approach similar to ours, where animations are controlled by components you reference to, nothing stops you from having the parameters in the animations completely dynamic and change them at runtime. However, be careful not to create coupling: if you have one view altering the animation component of some other view, you’ll probably shoot yourself in the foot.

      In Blood Runs Cold specifically, all the navigation was controlled by our Navigation Bar code: it routed you to different Dialogs when you clicked, and it reacted to calls to navigate. Visually, all our main screens slid as a single horizontal block, but we just piggybacked on the regular UI system: one screen animated to the left while going out and the other animated in at the same time, so they looked like a contiguous “object” on the screen.

      For that kind of navigation, however, we did cheat a bit: there were static members in the navigation bar code that defined the direction things should go when going in and out. We then had a “navigation dependent animation component” added to the main Dialogs in the game, and it pointed to that static info. It seems kinda hacky, but the simplicity kept the code completely decoupled: the only dependency was a 2-line animation component made specifically as an extension of the navigation bar; the screens had no idea what was going on, and all the logic was centralized in the navigation bar code.

      In our approach, at the end of the day, the screens don’t have any idea of the context, and besides the history, neither does the UI system. That means they’re fully decoupled, and any context dependency is externalized to something else (eg: the navigation bar code). Things that were truly dependent on context, in our case, ended up having their internal states handled by different components, controlled by the Screen code itself, without going through the UI system. There was, though, an advantage to going through the UI system: we always called the “setup” code for a screen when it gained focus, even if it was still visible (eg: you open screen A, the setup code runs. Then popup B shows up and A stays open in the background. Then you close B. This brings A back into focus, and the setup code runs again). This meant any housekeeping was always done when the UI system called the “OnPropertiesSet” method, so you didn’t have to manually request it. However, if it’s simpler to do it manually when handling internal state code, that’s what you should go for.

      I think in your case, for the tabs, unless the tab contents are widely different and complex, you’ll probably be able to go for the sub-components-controlled-by-a-single-Screen-controller route. In our case, we had a “tab controller” component that simply created a bunch of buttons and switched a bunch of objects on and off when you clicked them. Regarding your reward animations, unless any of the reward “sub-screens” can be reached via a different route in the UI navigation, you can probably just delegate it to the screen code itself (ie: what you said – control the states internally without going through the UI system).

      With this kind of approach, you can control the “location” where your logic happens in your code and centralize your context. One interesting example is our error/warning message popup: it could be fired up from any part of the code, but it always contained a text and OK and/or Cancel buttons. The screen itself just received the string and 2 callbacks as a payload – so the code that handled what would happen wasn’t in the error screen code, but in the callbacks written at the place that spawned that popup.

      Hope I’ve answered your questions and good luck with the projects!

  6. Hi, Yanko!

    Glad to talk to you once again! I have a little issue. I think that all Screens that are interactable should be Dialogs, and all the non-interactable stuff should be Panels. What would you recommend?

    Thanks a lot!

    1. Hey Jorge,

      Sorry, with the move and all your message got lost in the pipe. I wouldn’t say it’s necessarily panels for non-interactable things, and dialogs for interactable: if you look at the example in the post, the navigation panel is fully interactable, and it’s not a dialog. The system itself shouldn’t really care about that. If your UX language has that rationale (ie: everything that is interactable is a modal), then it’s a rule you could follow to decide what is what. But to me, the clearer (but still flexible) rule is: if it’s supposed to coexist front and center with other elements, it’s a panel. If it’s something that will encapsulate most or all of what the user is doing at a given point, it’s probably a Dialog.

  7. Hi Yanko,

    I hate to do this, given the huge amount of amazing information you’ve shared with everyone here, but I have a problem with the declaration of the RequestScreenBlock and RequestScreenUnblock in the WindowUILayer class. I’m getting a “event must be of a delegate type” error on these two declarations, and I’m a bit too new at programming and C# to be able to understand this.

    I’ve tried mucking around with changing the declarations from event Actions to Actions, delegates, and various combinations, but at this point I’m just throwing garbage at a wall hoping something sticks, and I clearly don’t understand the underlying problem. Any guided help on this would be much appreciated.

    1. Hey there, Adam
      I was out of town and it took me a while to check the comments here. It’s interesting that you weren’t the only one to have that issue (Dslyexic below had the same).

      It seems like he fixed it by explicitly declaring things as System.Action.
      I’m not entirely sure why exactly this could be – I wonder if there’s some namespace issue.

      Can you give me a bit more info? What Unity version were you using? Did you import the code into your own project with pre-existing code, or did you download the whole project and try to open it?

  8. Hi Yanko,

    Sorry to have to resort to this, as you’ve already given so much great information here… but I’m having an issue with the UIFramework on github that I don’t quite understand.

    In the WindowUILayer class, I’m getting an error for the RequestScreenBlock and Unblock Actions, saying that “event must be of a delegate type”. As far as I can tell/find online, an Action IS a delegate type, so I don’t know what the fuss is about. Any ideas?

  9. Yanko,

    In case these comment submissions must be approved by you first, please disregard my previous questions. Or, if they’re simply queued up and their posting is inevitable, perhaps this will clear up the issue a bit.

    Regarding the “… is not a delegate type” error: for whatever reason, the example project works fine, while importing the UIFramework core into my project does not. Comparing the two files, I realized nothing is different, except that for some reason my project says the System namespace is unnecessary, and thus the Action type being used was not actually System.Action. No idea where it came from, or why using the namespace doesn’t work while specifying System.Action does… I just went ahead and replaced all references to Action with System.Action and the problem went away.

    Sorry to bother, and no need to respond!

    1. Hey there, sorry for taking a while, I was out of town and things piled up 🙂

      I can’t really think of any reason for that – I wonder if there’s some namespace issue. It seems Adam in the comment above had the same issue; I published your comments for anyone who might drop by.

      I imagine you were opening both the raw project and your project in the same Unity version, right? What was it, btw?
      Were you using any other external packages that maybe have something called “Action”?

      I’ll try to find what could be happening, but if I don’t or if I take a while, I’m glad you have a workaround 😀

  10. Hello. Thank you a lot for sharing your great solution for UI development!
    May I ask what solution you use for developing the logic part of your games (combining states, scenes, ads/statistics plugins and networking)?

    1. Hi Aleksey,

      I haven’t had to worry too much about that for a while, but I usually try to use whatever Unity has natively (for ads, analytics etc). For scene management, I always end up doing something per-game – it tends to be really simple.
      Networking is one of the things you’ll usually want an abstraction layer for, so you can swap different network implementations in and out based on your needs.

      For mid-sized Unity games, I usually go for a service locator pattern (essentially, a prettier Singleton manager kind of thing) + signals being fired around to control state flow. Being totally honest, this can get out of hand very quickly if you don’t know what you’re doing, and some people hate it 😀
      But it does strike a decent balance between powerful and flexible, if done right, IMHO.

  11. Hello,
    The framework you provide fits perfectly with what I want to achieve, and I thank you for that.
    Moreover, I would like to know your opinion about keeping track of a “global” application state.

    I’ve already studied the possibility of having an FSM as a state controller, but I feel like it would overlap with the role of a UIController that simply subscribes to some kind of game events and hides/shows UI accordingly, eventually delegating some specific behaviour to other classes.

    So does it make sense to use such a state on a mobile game, or should I just let the UIController drive it?

    Have a great day !

    1. Hi Florian,

      Sorry for the delayed response, haven’t checked wordpress in a while! I imagine by this point your game might even be finished 😀

      That’s an interesting question. I think it will vary a lot depending on your game’s complexity. In BRC, there were 2 layers of UI state control: one of them was the actual game state (you’d either be “in game” or “in main menu”, and the sets of UI we’d show were different), and the other was entirely driven by the UI navigation – i.e.: after we showed the initial window for that game state, the UI state was purely “virtual”; there wasn’t really a place where we’d store “you’re at screen X” other than the UI Manager’s “CurrentWindow” – but that’s mostly because the state the UI was in was irrelevant to the code and control was pretty tight.

      This was possible mostly because we had a very simple graph, and we really just needed these 2 logical states (in game, in main menu). If you have a game with greater complexity and you’re more comfortable having a full FSM of game state controlled by code, I guess it wouldn’t hurt to have UI opening/closing controlled by state transitions and to make those states very granular. The problem comes from having to sync your game states with UI transition animations: do you delay transitioning the game state until the animation is finished? How do you deal with a mismatch between what window your UI thinks is open and the state the game should be in? Also, it’s always worth noting that maintaining FSMs purely in code is kind of a hassle (that’s why there are so many visual/data-driven solutions out there for FSMs).

      I’ve managed to dodge those problems in the games I’ve used this structure for by simplifying the game’s state graph as much as possible and letting the UI take care of itself – so, answering the question “is it doable to simply have the UI navigation serve as state?” specifically, I’d say yes – the less dependent your UI state is on game state, the easier it is.
