IcedCoffee is an attempt at a clean, minimalistic, OpenGL-based user interface framework written in Objective-C. It borrows some ideas and code snippets from cocos2d and cocos3d, but approaches things in a slightly different way. IcedCoffee is intended as a basis for accelerated user interfaces – in games, apps or whatever you wish. Its current highlights are: shader-based picking, a perspective-projected UI, render to texture (with depth buffers and picking support, even when nested), a basic view hierarchy (work in progress), and a minimalistic scene graph employing the visitor pattern to keep things clean.
IcedCoffee is in a really early stage. It’s not feature complete and some things won’t work, but I would really appreciate feedback on the core concept. A sneak preview can be grabbed here: IcedCoffee Framework on GitHub.
Warning: in what follows I wrote down the long story of why I decided to create yet another UI framework. In the end it boils down to “passion” and doesn’t tell you anything you couldn’t learn from IcedCoffee’s readme, but if you are still interested, voilà:
Long Version: Why Another Framework?
I have always been in love with user interfaces. Ever since I first had a computer with a graphical UI (roughly 20 years ago, I think), UI engineering and design has been one of the things I am really interested in. I have used and extended many UI frameworks, beginning with Visual Basic’s form editor, then C and the Win32 API (putting dialogs and run loops together manually was big fun, but I don’t miss it), continuing with the Microsoft Foundation Classes (MFC), then Qt for multi-platform UIs, and finally arriving at Cocoa/CocoaTouch.
Back in the old days, I made many attempts to put together a framework of my own. My first try was named “VelvetUI”, a 2D user interface with some 3D capabilities built upon Microsoft’s DirectX. Velvet ended up being a polymorphic C++/XML monster, slow and bloated – after a while I didn’t even want to use it myself. I always had a talent for coding, but Velvet was the first time I over-engineered a project to death, endangered my then-relationship by spending too much time on coding, and gained a hell of a lot of experience on what NOT to do when approaching a big engineering task.
However, there were a few nice things in Velvet that I still liked, so I built “tlib”, Velvet’s sequel. tlib drove my Bachelor’s thesis. It was a kind of Velvet light, and it worked. By the way, you can still download it if you don’t know what to do with your time and want to dig around a bit in the old code: this is tlib (and my Bachelor’s thesis). When I finally started studying for my Master’s degree in 2006, I abandoned Microsoft and moved to a Mac. tlib was history and I began using Carbon, then Cocoa, which I still love, then programming OpenGL.
Filling A Gap?
Whenever it comes to OpenGL, pretty much everyone I know brews their own soup. If you implement a game, you use OpenGL (or DirectX, or both) and probably either use an existing game engine or write your own. If you just want to implement a flashy GUI that goes beyond the standard set of possibilities offered by vendor-specific UI frameworks like MFC, WPF, Cocoa, GNOME or whatever, you probably implement it on your own. If you are on the Mac or iOS, you may use cocos2d or similar frameworks to speed up development a bit. All of these are not really focused on GUIs, but rather on games. I once tried to create an editor for a 2D map with 3D objects on it – quite simple, but complex enough to be really hard with cocos2d.
Finally, I decided to give it another try.
So, What’s In It?
I love Objective-C, so I decided to write the framework using that. I believe Objective-C paired with Apple’s Foundation framework constitute a quite solid basis for all the stuff you don’t want to think about too much: runloops, inter-thread communication, messaging, containers, strings, encodings, and so on.
What I like about cocos2d is the way it handles drawing, threading and OpenGL contexts, so I borrowed some pieces of code and some core principles from there. IcedCoffee performs drawing and event handling on a display link thread. The display link ensures frame updates are performed in sync with display refreshes; it is provided by the CoreVideo framework on OS X and by CoreAnimation on iOS. IcedCoffee re-routes events to that thread just like cocos2d does, so no inter-thread synchronization is required – nearly all IcedCoffee resources can be non-atomic as a result. However, IcedCoffee still allows asynchronous operations such as texture loading via GCD in a distinct OpenGL context (an approach inspired by cocos2d as well).
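The event re-routing idea can be sketched roughly like this (a minimal C++ sketch with illustrative names – the framework itself is Objective-C, and this is not IcedCoffee’s actual API): events raised on the main thread are merely queued, and the render thread drains the queue once per frame, so scene state is only ever touched from one thread and needs no per-object locking.

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Hypothetical sketch: the only synchronized object is this queue; everything
// the queued closures touch runs exclusively on the render thread.
class EventQueue {
public:
    // Called from the main/UI thread.
    void post(std::function<void()> event) {
        std::lock_guard<std::mutex> lock(mutex_);
        events_.push(std::move(event));
    }

    // Called once per frame on the render ("display link") thread,
    // before drawing.
    void drain() {
        std::queue<std::function<void()>> pending;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(pending, events_);   // grab all pending events at once
        }
        while (!pending.empty()) {
            pending.front()();             // runs on the render thread
            pending.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<std::function<void()>> events_;
};
```

Swapping the whole queue under the lock keeps the critical section tiny, so the main thread is never blocked for the duration of event handling.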
What I dislike about cocos2d is the way it relies on singletons – cocos2d is full of them. I don’t want to say that singletons are a bad thing per se, but my intuition and experience tell me that they tend to cause trouble later on. So I decided to make IcedCoffee virtually singleton-free. Well, nearly: IcedCoffee has a few singletons, but it uses them only for things that really are global and unique – I guess you know what I mean.
What I also dislike about cocos2d is the whole CCLayer business and the way user interaction and hit testing are performed. I personally don’t want to fiddle around with rects, priorities and so on. So I decided that a core component of IcedCoffee would be its ability to do color-based picking using shaders. This works on the OpenGL view’s frame buffer and also on render texture FBOs, even when nested. It was a hell of a lot of work, but it now works, and I hope you like it as much as I do. Since I liked the way Cocoa handles event processing with its responder chain, I decided to incorporate that into the framework and connect it with the picking. As a result, each node can receive mouse or touch events when someone clicks or taps on it.
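The core trick behind color-based picking is simple: in a separate pick pass, every node is drawn into an off-screen buffer with a unique flat color that encodes its index, and reading back the pixel under the cursor recovers the node. A minimal sketch of the ID-to-color encoding (illustrative C++ names, not IcedCoffee’s API):

```cpp
#include <cstdint>

// One RGB8 pick color per node; 24 bits allow roughly 16.7 million
// distinct pickable nodes.
struct PickColor { uint8_t r, g, b; };

// Encode a node index as a flat color. In the pick pass, a fragment shader
// would output this color for every fragment of the node.
PickColor idToColor(uint32_t id) {
    return { uint8_t(id >> 16), uint8_t(id >> 8), uint8_t(id) };
}

// Decode the color read back from the pick buffer (e.g. via glReadPixels
// at the cursor position) into the node index again.
uint32_t colorToId(PickColor c) {
    return (uint32_t(c.r) << 16) | (uint32_t(c.g) << 8) | uint32_t(c.b);
}
```

Because hit testing is just another render pass, it naturally handles arbitrary shapes, transforms and overlaps – no rects or priorities involved.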
What I like about cocos3d is the way it does visitation. The visitor pattern is really powerful when it comes to scene graphs, so I decided to implement visitors in IcedCoffee. The framework ships with two default visitors: one for drawing and one for picking. The drawing visitor is quite straightforward, but the picking visitor does real magic: it allows for picking with nearly any custom node you could write, and in 99% of cases you don’t need to care about it at all. It works with exactly the same efficiency as the drawing visitor. So if you write some kind of visibility detection and incorporate it into the visitors, your picking algorithm will automatically be as fast as your drawing algorithm – quite nice.
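The reason picking and drawing share the same performance characteristics is that both are just visitors running over the same traversal. A minimal sketch of that idea (again in C++ with made-up names, not IcedCoffee’s actual classes):

```cpp
#include <memory>
#include <string>
#include <vector>

struct Node;

// A visitor is handed every node of the scene graph in traversal order.
struct Visitor {
    virtual void visit(Node& node) = 0;
    virtual ~Visitor() = default;
};

// The node owns the traversal; visitors only decide what happens per node.
struct Node {
    std::string name;
    std::vector<std::unique_ptr<Node>> children;

    void accept(Visitor& v) {
        v.visit(*this);                          // pre-order: parent first
        for (auto& child : children)
            child->accept(v);
    }
};

// Normal rendering pass.
struct DrawVisitor : Visitor {
    int drawn = 0;
    void visit(Node&) override { ++drawn; /* issue draw calls here */ }
};

// Pick pass: identical walk, but nodes are drawn with their pick colors.
struct PickVisitor : Visitor {
    int visited = 0;
    void visit(Node&) override { ++visited; /* draw with flat pick color */ }
};
```

Since both visitors ride the same `accept` recursion, any culling or visibility logic added to the traversal speeds up both passes at once, which is exactly the property described above.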
Another quite unique thing about IcedCoffee is its user interface camera. IcedCoffee uses a perspective projection that maps points on the world’s XY plane directly to pixels on the frame buffer, which allows for flashy 3D effects while keeping (nearly) implicit retina display support.
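The math behind such a pixel-exact perspective UI camera is short. For a given vertical field of view, there is exactly one eye distance from the XY plane at which one world unit on that plane projects to one pixel. A sketch of that computation (my own illustrative helper, not an IcedCoffee function):

```cpp
#include <cmath>

// At this distance from the world XY plane, a perspective camera with the
// given vertical field of view maps one world unit on that plane to exactly
// one pixel on a framebuffer of the given height:
//   tan(fovY/2) = (height/2) / eyeDistance
//   =>  eyeDistance = (height/2) / tan(fovY/2)
double uiEyeDistance(double framebufferHeight, double fovYDegrees) {
    const double kPi = 3.14159265358979323846;
    double halfFovRad = (fovYDegrees * 0.5) * kPi / 180.0;
    return (framebufferHeight * 0.5) / std::tan(halfFovRad);
}
```

With a 90° field of view this reduces to half the framebuffer height, and on a retina display the same formula simply uses the doubled pixel height, which is why the retina support falls out almost for free.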
IcedCoffee is still very basic. It does not yet feature any high-level user interface controls. I am currently working on a control abstraction and some basic controls. Another focus for the next couple of weeks is font rendering and update scheduling. If you are interested, please have a look at IcedCoffee’s test cases and the code, and add a comment to this post – or send me an email. I really appreciate it!