Here’s the story of the hurdles we had to overcome. Let me get one thing out of the way first: the lack of access to the DirectX device was extremely annoying. With that said, we started our integration… and immediately ran into a problem.
There was no easy way to create an empty texture with a specified size, let alone a shared texture for Windows Vista/7. This introduced the first small inefficiency in the integration: we had to use shared memory as the image transport mechanism from Coherent UI to CryEngine 3, which involves some memory copying (using shared textures doesn’t). Even access to the device wouldn’t help much here, since we want a valid engine handle for the texture, which means we can’t bypass the engine. After some experimenting, we settled on creating a dummy material with the editor, assigning a placeholder diffuse texture and getting its ID with the following code:
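(The original snippet didn’t survive, so here is a rough reconstruction from memory. The material path is made up, and the exact free-SDK accessor chain may differ slightly; treat this as an illustration, not copy-paste code.)

```cpp
// Sketch only - assumes the CryEngine 3 Free SDK headers; names are from memory.
// Load the dummy material created in the editor and pull the ID of its
// placeholder diffuse texture so we can overwrite it at runtime.
IMaterial* pMaterial =
    gEnv->p3DEngine->GetMaterialManager()->LoadMaterial("Materials/coherent_dummy");
int textureId = -1;
if (pMaterial)
{
    const SShaderItem& shaderItem = pMaterial->GetShaderItem();
    if (shaderItem.m_pShaderResources)
    {
        SEfResTexture* pTex = shaderItem.m_pShaderResources->GetTexture(EFTT_DIFFUSE);
        if (pTex && pTex->m_Sampler.m_pITex)
            textureId = pTex->m_Sampler.m_pITex->GetTextureID();
    }
}
```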
Having the ID, we can update the texture at runtime using IRenderer::UpdateTextureInVideoMemory. This approach comes with its own set of problems, however. An obvious one is that you need a unique dummy material and diffuse texture for each Coherent UI View, which is annoying. Another problem is that this texture is not resizable, so we kept an array of textures with common sizes, which were the only ones allowed when resizing a View. The least obvious problem was that if the material’s texture had a mip chain, IRenderer::UpdateTextureInVideoMemory did not automatically generate the lower mip levels, which produced strange artifacts because of the trilinear filtering. It didn’t perform any kind of stretching either, which is another reason we only allowed preset View resolutions. You can see the mips problem here:
Problematic trilinear texture filtering
The placeholder texture
It took some time to figure out since, at first, we didn’t have fancy placeholder textures, only solid-color ones. The solution was to simply assign a texture that had only one surface (i.e. no mips). This presented another small inefficiency.
Ok, we have a texture now, we can update it and all, but how do we draw it on top of everything so it acts as a UI? After some digging in the SDK, we found that the CBitmapUI class (and, more precisely, IGameFramework’s IUIDraw) should be able to solve this, having various methods for drawing full-screen quads. The color and alpha channel weights were messed up, however, so we had to call IUIDraw::DrawImage beforehand, which takes the weights as parameters, so we could reset them to 1.0. We just drew a dummy image outside the viewport to reset these values, which was yet another small inefficiency.
Moving on to the biggest inefficiency of all – Coherent UI provides color values with premultiplied alpha. This means that transparency is already taken into account. When drawing the fullscreen quad, the blending modes in CryEngine are set to SourceAlpha/1-SourceAlpha for the source and destination colors, respectively, meaning that the source alpha will be taken into account again. What we had to do is “post-divide” the alpha value, so that when DirectX multiplies it in we get the correct result. We had to do this for each pixel, involving both bitwise and floating-point operations – imagine the slowdown of doing that on a 1280×720 or even 1920×1080 image. If we had device access, all that could be fixed with a single call setting the blend mode but, alas, we don’t. Also, if we used the DirectX 11 renderer, we’d have to do another pass over the pixels to swap their red and blue channels, because the component ordering has changed since DirectX 10!
Next on the list was input forwarding – we wanted a way to stop player input (so we don’t walk or lean or anything while typing) and redirect it to Coherent UI, so we could interact with the Views. This wasn’t really a problem, but it was rather tedious – we had to register our own IInputEventListener that forwards input events to the focused View, if any. The tedious part was creating gigantic mappings for the CryEngine 3 to Coherent UI event conversion. Stopping player input when interacting with a View was easy, too – we just had to disable the “player” action map using the IActionMapManager. We also needed a free cursor while in-game, so you can move your mouse when browsing, which was just a matter of calling the Windows API ShowCursor.
The final problem was actually getting the texture coordinates of the mouse position projected onto the surface below it. I tried using the physics system, which provides raycasts that I got working, but I couldn’t do a refined trace on the actual geometry, nor obtain the geometry to do the tracing myself. And even if I had managed that, I couldn’t find any way to get the texture coordinates using the free CryEngine 3 SDK. That’s why I just exported the interesting objects to .obj files using the CryEngine Editor, put the geometry in a KD-tree and did the raycasting myself after all. For correct results, we first trace using the physics system, so we know that no object is obstructing the View. Then we trace in the KD-tree and get the texture coordinates, which can be translated to View coordinates.
In conclusion, it worked pretty well, although if Crytek gave us access to the rendering device we could have made the integration much more efficient; but then again, we used the free version, so that’s what we get. I was thinking of ways to get the device, like scanning the memory around gEnv->pRenderer (inspecting the first 3 virtual table entries) and then querying the interface for a D3D device, or making a proxy DLL that exports the same functions as d3d9/11.dll and installing hooks on the relevant calls, but I don’t have time for such fun now.
Now that we’ve seen how far we can go using the free CryEngine 3 SDK, next on the agenda is full Unity 3D integration (we have device access there!). Be on the lookout for it next month!