Think of the vast seas of games like Assassin's Creed and Subnautica, the expansive battles of Battlefield and Call of Duty, and the captivating fire magic of Skyrim and The Witcher.
If you're a gamer or game dev enthusiast, you surely remember those scenes and effects.
But is what we see in games the same as what we see in real life? Hardly. Even the games we truly love and draw inspiration from have little in common with realistic water and fire. All too often, VFX look good but lack interactivity, so they aren't really immersive.
Consider Call of Duty: Modern Warfare, where developers employed a new particle-based fire simulation for more lifelike, dynamic fire effects. This impressive feat, achieved through various particle types to mimic flames, smoke, and sparks, came at a steep cost of tens of millions of USD.
However, the cost of creating authentic liquid, smoke, and fire effects is no longer as high, thanks to the Zibra Effects toolkit.
The journey of game development is fraught with challenges, not least in achieving realistic fire, smoke, and liquid.
Traditional methods for replicating fluid dynamics, including particle systems and shader effects, are visually striking but demand substantial hardware resources. Fluid simulations, while accurate, require significant time spent writing specialized algorithms, as well as a lot of computational power.
Achieving realistic natural elements like fire and water is a complex task that requires a careful mix of advanced technology and artistic skill. Fire, for example, needs to be more than visually impressive; it must also exhibit authentic behavior, reacting to the game's environment and player actions, which involves intricate simulations and significant computational efforts.
Water realism, similarly, presents its own set of challenges. Techniques like real-time fluid simulation are resource-intensive, while baked simulations lack interactivity and realism. Striking a balance between performance cost and realism requires both technical and artistic expertise, as well as a large amount of time.
Zibra Effects offers a comprehensive suite of solutions to streamline the game development process.
It simplifies integrating advanced liquid, smoke, and fire effects into virtual environments, catering to a wide range of platforms and covering a variety of use cases such as PC games, cinematics, and training simulators.
This ecosystem includes several key products: Zibra Liquid, Zibra Smoke & Fire, and Zibra Effects.
Creators can also benefit from a complementary Assets Library to fast-track the creative process with ready-made VFX that can be used right out of the box.
Zibra Liquid is an extension for real-time 3D liquid simulation. Born from the synergy of a powerful physics solver and unique object representation technology, it's equipped with advanced features that allow you to create high-quality, performant, small-to-midsize 3D liquid simulations.
This tool empowers creators to enhance environmental visuals with lifelike VFX, such as flowing rivers, serene ponds, and dynamic lava streams. It also enables the integration of interactive gameplay elements, leveraging realistic liquid physics to design unique puzzles, obstacles, and more.
Zibra Liquid also introduces a new dimension to visual storytelling, enabling the creation of dynamic gameplay components through real-time simulated liquids. It's ideal for crafting imaginative concepts like elemental entities, shape-shifters, and other creative designs.
The Zibra Liquid feature set includes:
Read more on features and specifications on the Unity Asset Store.
Zibra Smoke & Fire, utilizing custom physics simulation and the object representation technology applied in Zibra Liquid, enables the creation of real-time, physically accurate smoke and fire effects, along with other functionalities.
This solution includes a multigrid pressure solver and a fully custom volumetric rendering system for Unity. While its core functionality is akin to that of Zibra Liquid, Zibra Smoke & Fire introduces several new features.
One notable addition is an advanced lighting system, which allows for realistic integration of effects into scenes, ensuring they are lit correctly to fit the environment.
With Zibra Smoke & Fire, it’s possible to illuminate environments with real-time simulated fire, like torches that light up walls, or use the light in the scene to enhance the effect, making it more appealing.
Plugin functionality enables you to use smoke and fire simulation to improve the interactivity of your project with custom game mechanics, such as combat effects, magic effects, and puzzles based on smoke control. You can also create immersive VFX of burning fires, smoke trails, and weather effects such as clouds, dust, debris, and fog.
Both plugins provide access to an API that lets you control the simulation and query data. Using this API, you can create new game mechanics, animate component parameters, programmatically change rendering quality, and many other things.
Check out the Zibra Smoke & Fire feature list below:
Read more about Zibra Smoke & Fire on the Unity Asset Store.
Zibra Liquid and Zibra Smoke & Fire are designed to be as user-friendly as possible. Their core functionality is covered in the User Guide.
We've curated a gallery of ready-to-use assets to simplify the game creation process. These assets are readily compatible with Zibra Liquid and Zibra Smoke & Fire, allowing you to achieve instant results without the need for manual effect creation.
Our Asset Library already includes a variety of such assets and is regularly updated with new additions. They are designed for immediate use: drag and drop them into your scene, and you're set to start.
The Zibra AI Asset Library offers many benefits to game creators, specifically:
Quick start: Buy an asset and add it to the scene. Everything works out of the box.
Deep customization: Tweak and modify the asset to fit your needs. Easily change the simulation's appearance and/or behavior with vast customization options.
Cross-platform support: Set up a high-quality simulation for high-end PCs, and with a simple parameter change, run a scaled-down version of the same simulation on mobile.
Flexible integration: Easily integrate the asset into any project with comprehensive URP, BRP, and HDRP render pipeline support.
Zibra Effects is a comprehensive product crafted to fulfill all your creative requirements. Created with versatility in mind, it combines Zibra Liquid and Zibra Smoke & Fire capabilities in one place, offering additional XR support.
In other words, it’s a game-changer for game dev teams struggling to create realistic and immersive effects.
Zibra Effects unlocks the full potential of VR gaming. It enables the creation of stunning virtual worlds with life-like water features, flowing rivers, dynamic smoke, and intense fire simulations.
Players can interact with these authentic elements, deepening their sense of immersion and engagement in the virtual realm.
Apart from this, it opens the door to numerous virtual training possibilities. Whether it's firefighting scenarios or airplane simulations, the Zibra Effects solution provides the tools to replicate real-world situations with authenticity and accuracy.
Trainees face realistic challenges like water-based obstacles, dynamic smoke, and fire simulations, which are crucial for honing skills and improving decision-making in high-pressure situations.
Zibra Liquid, Zibra Smoke & Fire, Zibra Effects, and complementary assets for these tools are available on the Unity Asset Store.
Supported Configurations:
Editor platforms:
Windows
macOS
Build platforms:
Windows
UWP
macOS
iOS
Android
VR Support:
Unity version: 2021.3 or later (the latest patch version is recommended)
Supported Render Pipelines: Built-in RP (BRP), URP, HDRP
For console support, reach out to hello@zibra.ai.
Setting New Standards. How Zibra Effects' Advanced Real-Time Simulations Bring Liquid, Smoke, and Fire to Life in Gaming
Even if you are well-versed in various visual effects and fluid simulations, delving into Zibra Liquid, Zibra Smoke & Fire, and Zibra Effects may present you with new terms.
This is the Zibra Smoke & Fire Glossary, designed to familiarize you with the specific terminology used in our solution for the creation of volumetric effects of real-time simulated smoke and fire.
As our product evolves, this glossary will be updated with the latest terms and features.
Manipulator — A component that interacts with the simulation in various ways. Different types of manipulators have specific functionalities and parameters.
Collider — A type of manipulator that enables collision with the simulation. This manipulator is essential for adding realism to the simulation, as it allows elements like smoke and fire to collide with objects and surfaces within the simulated environment.
Emitter — A manipulator that emits fluid into the simulation. In the case of Smoke & Fire simulation, it emits smoke and fuel at a set temperature.
Void — A manipulator designed for precise control in fluid dynamics. It operates by creating pressure within an area defined by a Signed Distance Field (SDF), allowing for targeted manipulation of these elements. The Void manipulator also controls the rate of velocity decay, determining how quickly smoke and fire lose their speed within the influenced zone. Additionally, it regulates color decay, affecting how rapidly the color intensity of the smoke or fire changes.
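The decay behavior of the Void manipulator can be illustrated with a small sketch. This is our own toy model, not Zibra's actual implementation or API: we assume both decays are exponential, per-second rates, and that points with a negative SDF value count as "inside" the Void.

```python
import numpy as np

def apply_void_decay(pos, vel, color, sdf, velocity_decay, color_decay, dt):
    """Damp velocity and color for particles inside a Void's SDF region.

    sdf(pos) < 0 means "inside" the shape. Decay rates are per second.
    All names and the exponential model are illustrative assumptions,
    not the actual Zibra API.
    """
    inside = sdf(pos) < 0.0
    # Exponential decay: value *= exp(-rate * dt) each simulation step.
    vel[inside] *= np.exp(-velocity_decay * dt)
    color[inside] *= np.exp(-color_decay * dt)
    return vel, color

# Toy usage: a spherical void of radius 1 at the origin. The first
# particle is inside and decays; the second is far outside and does not.
sphere_sdf = lambda p: np.linalg.norm(p, axis=1) - 1.0
pos = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
vel = np.ones((2, 3))
col = np.ones((2, 3))
vel, col = apply_void_decay(pos, vel, col, sphere_sdf,
                            velocity_decay=2.0, color_decay=1.0, dt=0.1)
```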
Detector — A manipulator that detects fluid in the simulation. In the case of Zibra Smoke & Fire, it detects the amount of smoke, fuel, and heat. It can also position a point light in the center of fire emission to create the effect of fire illumination.
Force Field — A manipulator that applies force to fluid.
Simulation Grid — A grid that stores information about the simulation state. Higher grid resolution increases the simulation quality but decreases performance and also consumes more VRAM. When selecting a simulation volume game object, you’ll see a preview of the grid resolution.
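To get a feel for why higher grid resolution consumes more VRAM, here is a rough back-of-the-envelope estimate. The channel count and layout below are our own assumptions for illustration (Zibra's actual grid layout is not documented here); the key point is that doubling resolution in each axis multiplies memory and work by 8x.

```python
def grid_vram_mb(res, channels=8, bytes_per_channel=4):
    """Rough VRAM estimate for a dense simulation grid.

    `channels` (e.g. velocity xyz, density, temperature, fuel, ...) and
    32-bit storage are illustrative assumptions, not Zibra's real layout.
    """
    nx, ny, nz = res
    return nx * ny * nz * channels * bytes_per_channel / (1024 ** 2)

low  = grid_vram_mb((128, 128, 128))   # modest grid
high = grid_vram_mb((256, 256, 256))   # 2x resolution per axis = 8x memory
```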
SDF (Signed Distance Field) — In our simulations, this is the component that represents the shape of a manipulator. Each manipulator must have a shape to function, so each manipulator requires an SDF component.
Analytic SDF — A component that represents a simple shape, like a sphere, box, capsule, etc.
Neural SDF — A component that represents the shape of a complex static mesh. Neural SDFs use a proprietary representation and need to be generated beforehand for use.
Neural SDF representation — Compressed data generated for a Neural SDF containing the shape of a static mesh. It can only be generated in the editor, and the generation process occurs on Zibra AI’s servers. This data is decompressed on the fly, exactly when the simulation needs it, so that it’s stored in a compressed form in the VRAM. This allows for complex colliders in real-time simulations, which would otherwise require too much computational power.
Skinned mesh SDF — A representation of an animated skeletal mesh.
Invert SDF — An option that allows you to invert the shape of an SDF. For example, a collider with this option enabled will only allow fluid inside of it.
Visualize scene SDF — An option to visualize the shape of manipulators. It can be used for debugging.
Zibra Smoke & Fire Glossary
In the ever-evolving world of technology, a buzzword has been creating ripples lately – spatial computing. Imagine a future where your gaming experiences move beyond the confines of screens and rooms, where the virtual becomes an integral part of your reality. Welcome to a new age where innovation meets immersion, and we're here to give you a quick rundown of what it's all about.
Spatial computing is more than just a confluence of the digital and the physical; it represents a paradigm shift in how humans and computational systems coalesce. It seamlessly combines AR and VR principles, breaking the boundaries of both simultaneously.
While AR brings elements into our tangible surroundings and VR immerses users in an alternate environment, spatial computing dynamically interprets and interacts with the real-world context. Its core differentiation lies in real-time, environment-aware computation.
Sensor Fusion: Spatial computing leverages an array of sensors – from depth cameras to LiDAR to accelerometers. These sensors collectively gather exhaustive environmental data, mapping the physical space in terms of depth, orientation, and relative positioning.
Machine Learning & AI: Machine learning models and algorithms interpret and predict spatial relationships by harnessing the data from these sensors. Whether it's recognizing an object, understanding human intent from gestures, or predicting motion trajectories, ML and AI form the backbone of real-time spatial analysis.
Real-Time Computation: Beyond interpretation, spatial computing thrives on instantaneous feedback. This necessitates robust computational capabilities that process vast amounts of data in real time, ensuring the digital reacts and adapts to the physical seamlessly.
Unlike traditional AR, which might have a pre-defined digital overlay irrespective of nuanced environmental changes, spatial computing is adaptive.
It's not about a static overlay but a dynamic integration, where digital content modifies itself based on continuous environmental feedback, leading to unparalleled interactivity and realism. As computational hardware becomes more advanced and algorithms more refined, the horizons of spatial computing only seem to expand, offering a canvas rich with opportunities for tech innovators.
Spatial computing is rapidly redrawing the boundaries of gaming. Rather than merely augmenting the player's environment, it's about leveraging complex algorithms and real-time sensor feedback to craft a multi-dimensional interactive space.
Taking the example of games such as Pokemon GO and Minecraft Earth, there's a deceptive simplicity to their interfaces. Underneath, they utilize sophisticated algorithms. Pokemon GO, for instance, uses geospatial data combined with real-time player location tracking. It doesn't merely superimpose a Pokémon into a generic environment but rather processes intricate map data, understands geo-fenced zones, and even computes satellite information to offer a localized gaming experience.
Meanwhile, Minecraft Earth is not just placing block structures; it's utilizing ARKit and ARCore capabilities to ensure structures understand shadows, occlusion, and scale based on the user's environment.
Game developers equipped with such tools are on the cusp of a revolution. By incorporating this technology, they are moving beyond the traditional confines of screen-based gameplay, opening doors to dynamic real-time interactions where each session can potentially provide a unique encounter based on the player's surroundings.
Apart from gaming, the technology of extended reality is also gaining traction in other areas, used for various industrial applications, retail, and even the entertainment industry.
In the retail sector, the technology enables virtual try-ons, where consumers can visualize how clothes or accessories might look on them or how well various furniture pieces fit in their actual living spaces.
The entertainment industry is harnessing this technology to provide immersive experiences in theme parks and movies, offering audiences a deeper sense of immersion in the storylines.
Credits: https://www.instagram.com/reel/Cyi8o9frTb7/?igshid=YjVjNjZkNmFjNg%3D%3D
Meanwhile, in industrial applications, spatial computing is revolutionizing how businesses operate. It's instrumental in predictive maintenance, which significantly reduces pipeline production stalls. Modern machinery and factory setups have grown in complexity, necessitating advanced and intricate worker training. Spatial computing aids in streamlining this onboarding process, providing augmented instructions to workers in real-time, ensuring that they can navigate and operate sophisticated machinery with greater efficiency. Companies like Siemens, GE Digital, and ABB are at the forefront of this paradigm shift, harnessing spatial computing for a spectrum of applications, from machinery diagnostics to optimizing assembly lines.
The release of Apple Vision Pro has undoubtedly sent ripples through the tech community. With its advanced sensor fusion capabilities, combining LiDAR, depth sensing, and enhanced machine learning models, it's setting new standards for spatial recognition and real-time processing. Apple's deep integration approach, where hardware works hand-in-hand with software, is poised to provide unparalleled accuracy and responsiveness in spatial computing tasks.
For developers, the Apple Vision Pro toolkit is undeniably a game-changer. While its complexity and cost position it as a high bar for other devices, the granular data and Apple's formidable ecosystem ensure that it unveils the full potential of spatial computing for those who seek unmatched precision and immersion. This device isn't just about today; it's paving the way for other manufacturers towards the future, signaling where the industry is headed.
In summary, while gaming might be the most visceral and immediate application of spatial computing that grabs headlines, the real technical marvel lies in the breadth of its application and the depth of its integration, especially with powerhouses like Apple pushing the envelope.
Despite the unparalleled potential that spatial computing promises, it's not without its challenges that need to be overcome.
Infrastructure and Resources: High-quality spatial computing demands robust infrastructure. Developers often grapple with the need for enhanced computational power, real-time data processing capabilities, and high-bandwidth connectivity.
Content Creation: Today, the norm is to craft content within a 2D realm, bound by our screens. Yet, looking ahead, the prospect of integrating AI and spatial computing is hard to fully grasp. Implementing such advanced techniques and systems will present challenges that are difficult to imagine even now.
Adaptability: Designing content that's fluid enough to adapt to varied real-world environments requires sophisticated algorithms and intricate design strategies, making the development process more complex than traditional game development.
Standardization and Interoperability: The spatial computing arena is still maturing, leading to a lack of standardized tools and protocols, which can pose integration challenges for developers.
However, even in the face of these obstacles, spatial computing has already emerged as a pivotal force in the next wave of technological breakthroughs, showcasing notable triumphs when it comes to:
Enhanced Player Engagement: Games leveraging spatial computing offer players a deeply immersive and interactive experience, resulting in increased engagement and retention.
Versatility: The adaptability of spatial computing means games can be tailored to individual environments, offering a unique gaming experience each time.
Real-world Integration: The successful blending of game mechanics with real-world environments is a testament to spatial computing's potential.
And there’s much more.
While gaming remains a flagship use case, spatial computing's application is expanding into various sectors. Enterprises now leverage this technology for efficient virtual training modules, allowing for a more interactive and hands-on learning experience. Virtual control mechanisms powered by spatial computing are becoming the norm in industries, facilitating precise and real-time remote management.
The industry's trajectory is evident. We're transitioning from the experimental phase to comprehensive integration, with this technology playing a pivotal role across numerous domains. Consider for instance its burgeoning impact in industrial solutions, transforming operations and processes, or educational technologies, where it enhances interactive learning and training.
In conclusion, despite its challenges, spatial computing stands at the forefront of technological evolution, and its diverse applications signal a future where digital and physical integrations are seamless and ubiquitous.
We at Zibra AI stay on the cutting edge, developing solutions to heighten the immersion of virtual worlds. Our focus is on creating tools to simplify the addition of content for all virtual worlds and experiences.
Real-time simulation solutions combined with spatial computing have the potential to revolutionize emergency training and industrial applications. Imagine scenarios where our simulated liquids, smoke, and fire interact with and adapt to the real-world environment, enhancing realism for users.
On another frontier, we are working on the Gen AI track, which aims to redefine content creation. Our unique ML layer harnesses the power of AI to generate production-ready 3D objects, PBR materials, and textures from various inputs such as text, videos, or image references. This not only reduces development time and costs but also empowers content creators by offering innovative avenues to manifest their visions.
When Gen AI aligns with spatial computing, the horizon broadens. The merging of these two can lead to innovative applications, such as a Furniture Design app that integrates existing real-world objects and allows users to generate and customize more. Or a Virtual Clothing Design platform where fashion designers can sculpt garments directly onto real human forms.
At Zibra AI, we are charting the course for a future where digital content creation and real-world interactions blend seamlessly, creating a new and exciting reality.
Read more here: https://effects.zibra.ai/
Merging realities: spatial computing and the next gaming revolution
Over the last few decades, developers all over the world have tried to find a way to create realistic and performant real-time fluid simulations. Several approaches naturally emerged from this effort. But is there a perfect solution? What is the difference between approaches? Which approach is used in Zibra Liquid and why? To understand it all, let’s start with the basics.
In general, approaches to fluid simulation can be divided into three main groups: Lagrangian, Eulerian, and hybrid (Lagrangian-Eulerian). Each has its strengths and weaknesses. So the first thing one needs to do when simulating matter is decide how to describe, in terms that can be worked on numerically, what the matter is made up of. It can be a point cloud, a discrete grid of values, or a mesh describing the area/volume of the surface you’re modeling.
The most common approaches used for simulating fluids in game dev are Eulerian. They are grid-based, meaning the simulation space is divided into grid cells where information about different fluid properties (velocity, density, temperature, etc.) is stored.
These approaches allow for simulating large-scale effects quickly and are relatively easy to implement, but they also have several weaknesses. For example, to capture finer details with Eulerian methods, one needs a high-resolution grid. There are also difficulties with advection, and Eulerian methods are often computationally more demanding than particle-based simulations. They suffer from high numerical dissipation, which manifests as unphysical viscosity, making low-viscosity fluids difficult to model.
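The dissipation problem is easy to demonstrate. Below is a minimal 1D semi-Lagrangian advection step (a common Eulerian scheme, our own illustrative sketch): each cell traces backwards along the velocity and linearly interpolates the advected quantity. The interpolation smears sharp features, which is exactly the numerical dissipation described above.

```python
import numpy as np

def semi_lagrangian_advect(q, u, dt, dx):
    """One semi-Lagrangian advection step for a 1D quantity q on a grid.

    Trace each cell center backwards along velocity u and linearly
    interpolate q at the backtraced position. Unconditionally stable,
    but the interpolation acts like artificial viscosity.
    """
    n = len(q)
    x = np.arange(n) * dx              # cell-center positions
    x_back = x - u * dt                # backtraced positions
    idx = np.clip(x_back / dx, 0, n - 1)
    i0 = np.floor(idx).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    t = idx - i0
    return (1 - t) * q[i0] + t * q[i1]  # linear interpolation

# A sharp spike loses amplitude after just a few steps (dissipation):
q = np.zeros(64)
q[32] = 1.0
for _ in range(4):
    q = semi_lagrangian_advect(q, u=0.3, dt=1.0, dx=1.0)
```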
Nevertheless, they are still used in a number of applications: for example, in EmberGen, a popular real-time fluid simulation tool for creating fire, smoke, and explosions for VFX artists.
The Lagrangian approaches are particle-based. They work by simulating a large number of particles to approximate fluid molecules. The most popular Lagrangian methods are position-based dynamics and smoothed particle hydrodynamics.
In position-based dynamics, particles have positions and velocities, and they interact through potentials that imitate a fluid.
Smoothed-particle hydrodynamics is a similar but less stable approach, mostly used in scientific applications. Since it doesn’t use a grid, it depicts fluid more accurately, but it unfortunately has lower resolution and is much slower. This is because, to make particles interact with each other, you have to build a computationally expensive acceleration structure.
Niagara, a VFX system for Unreal Engine, gives you a general toolbox for particle simulations. With it you can, for example, build a fluid simulation that employs the smoothed-particle hydrodynamics method. Developers create an acceleration structure for particles, which allows them to obtain data from neighboring particles. But the need to loop over all neighboring particles slows down the solver.
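The neighbor-loop cost is visible even in a tiny SPH sketch. The density estimate below (using the standard poly6 smoothing kernel) loops over every particle pair, an O(N²) pattern; an acceleration structure such as a spatial hash exists precisely to cut this down. This is a generic textbook formulation, not Niagara's or Zibra's code.

```python
import numpy as np

def sph_density(positions, h, mass=1.0):
    """Smoothed-particle hydrodynamics density estimate (poly6 kernel, 3D).

    Each particle's density is a kernel-weighted sum over neighbors
    within smoothing radius h. The naive O(N^2) loop below is exactly
    the cost an acceleration structure is built to reduce.
    """
    coeff = 315.0 / (64.0 * np.pi * h ** 9)
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        r2 = np.sum((positions - positions[i]) ** 2, axis=1)
        # poly6 kernel: coeff * (h^2 - r^2)^3 inside the support radius.
        w = np.where(r2 < h * h, coeff * (h * h - r2) ** 3, 0.0)
        rho[i] = mass * np.sum(w)
    return rho

pts = np.random.default_rng(0).random((200, 3))  # particles in a unit cube
rho = sph_density(pts, h=0.2)
```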
Hybrid approaches combine the Eulerian and Lagrangian methods. They use a grid to exchange data between particles and particles to move the fluid forward.
In general, hybrid methods are much faster than Lagrangian approaches and allow for the generation of more particles: instead of the 60,000 particles feasible in position-based dynamics, they can simulate several million.
But the downside is that they are much harder to implement and have higher computational costs than pure Eulerian methods, because both particles and a grid must be maintained.
Overall, the number of liquid simulation approaches is quite significant. They have different uses. For example, there are also the Lattice-Boltzmann methods. They are often used for scientific purposes and simulate various physical phenomena such as condensation, vaporization, transition between different states, etc. These methods are very realistic, but very slow at the same time.
The most popular Lagrangian-Eulerian method is called Fluid-Implicit-Particle (FLIP). It is an adaptation to fluids of the implicit moment method for simulating plasmas, in which particles carry everything necessary to describe the fluid. Using the particle data, Lagrangian moment equations are solved on a grid.
The solutions are then used to advance the particle variables from timestep to timestep. It is used in various offline simulations, for example, in Blender or in a popular tool such as Houdini.
Apart from FLIP, there are also other methods. Among them is the Material Point Method (MPM) technique, and its improved version, the Moving Least Squares Material Point Method (MLS-MPM), from which our tool, Zibra Liquid, was born.
The main difference between the Moving Least Squares Material Point Method and FLIP is that in FLIP, only the force affecting the particles is calculated on the grid, while the velocity is simply stored on the particle.
In the material point method, which was originally based on the Particle-In-Cell (PIC) method, particles do not have their own velocity and compute it from the grid instead. The values of force and velocity are stored on the grid. This has its pros and cons. On the plus side, it makes the simulation much more stable. The downside is that it loses far more energy, because transferring velocity from particles to the grid loses the high-frequency details of the velocity field. As a result, the energy of the liquid decreases.
MLS-MPM is based on the Affine Particle-In-Cell (APIC) method that uses a velocity gradient matrix describing how a fluid rotates, compresses, and expands around a particle, or more technically, a first order linear approximation of the velocity around the particle. As a result, more information about the velocity is transmitted, which helps to minimize the loss of energy to some extent.
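The affine term in APIC is easiest to see in isolation. In the sketch below (our own illustration of the published APIC formulation, not Zibra's code), a particle contributes to a grid node not just its own velocity v_p, but v_p plus the affine correction C_p (x_i - x_p), where C_p is the velocity gradient matrix. For a spinning particle this transfers opposite velocities to nodes on opposite sides, so the rotation survives the transfer instead of being averaged away as in plain PIC.

```python
import numpy as np

def apic_particle_velocity_at_node(v_p, C_p, x_p, x_i):
    """APIC: the velocity a particle contributes to grid node at x_i.

    C_p is the particle's affine velocity-gradient matrix (3x3); the
    affine term carries the rotation/compression information that
    plain PIC drops during the particle-to-grid transfer.
    """
    return v_p + C_p @ (x_i - x_p)

# A particle spinning about the z axis (rigid rotation, unit angular
# velocity) contributes different velocities to nodes on opposite sides:
omega = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])  # velocity gradient of the rotation
x_p = np.zeros(3)
left  = apic_particle_velocity_at_node(np.zeros(3), omega, x_p,
                                       np.array([-0.5, 0.0, 0.0]))
right = apic_particle_velocity_at_node(np.zeros(3), omega, x_p,
                                       np.array([ 0.5, 0.0, 0.0]))
```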
Usually, physics simulations are done on the CPU while the GPU handles all the graphics processing. But nowadays, GPUs are far more powerful than CPUs and allow for the implementation of general-purpose algorithms (GPGPU). Because of this, in Zibra Liquid, we decided to utilize the far greater capabilities of graphics processors to execute simulations on a bigger scale.
The algorithm itself works this way: there are two buffers, one with particle data and another with grid data, both residing on the video card.
First, we implement the part of the algorithm called particle-to-grid. For this purpose, we use a compute shader that reads the data from all particles, then goes through the nearby cells and, using atomic operations, writes into these cells the momentum and mass that every particle transfers to the grid.
When all the necessary data has been transferred to the grid, we update the grid: we recover the velocity from the accumulated values, add acceleration such as gravitational force if necessary, and then issue an additional dispatch.
In it, we again go through the 27 cells closest to each particle, compute the new particle velocity, approximate the velocity gradient matrix around the particle, and write everything back into the particle buffer.
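The scatter/gather loop described above can be sketched in plain Python. This is a heavily simplified stand-in, not Zibra's implementation: unit particle mass, no pressure solve, no APIC affine term, positions in cell units, and plain array accumulation where the GPU compute shader would use atomic adds.

```python
import numpy as np

def quadratic_weights(fx):
    """1D quadratic B-spline weights for the 3 nearest grid nodes.
    fx is the particle's offset from the base node, in cells (0.5..1.5)."""
    return np.array([0.5 * (1.5 - fx) ** 2,
                     0.75 - (fx - 1.0) ** 2,
                     0.5 * (fx - 0.5) ** 2])

def p2g_g2p_step(x, v, grid_res, dt, gravity=-9.8):
    """One simplified hybrid particle/grid step over 3x3x3 = 27 nodes."""
    momentum = np.zeros(grid_res + (3,))
    mass = np.zeros(grid_res)
    base = np.floor(x - 0.5).astype(int)   # corner of the 3x3x3 window
    fx = x - base                          # fractional offsets in [0.5, 1.5]
    # Particle to grid: scatter momentum and mass to the 27 nearby nodes.
    for p in range(len(x)):
        wx, wy, wz = (quadratic_weights(fx[p, d]) for d in range(3))
        for i in range(3):
            for j in range(3):
                for k in range(3):
                    w = wx[i] * wy[j] * wz[k]
                    node = tuple(base[p] + (i, j, k))
                    momentum[node] += w * v[p]
                    mass[node] += w
    # Grid update: recover velocity from momentum/mass, apply gravity.
    vel = np.where(mass[..., None] > 0,
                   momentum / np.maximum(mass, 1e-12)[..., None], 0.0)
    vel[..., 1] += gravity * dt
    # Grid to particle: gather the new velocity from the same 27 nodes.
    v_new = np.zeros_like(v)
    for p in range(len(x)):
        wx, wy, wz = (quadratic_weights(fx[p, d]) for d in range(3))
        for i in range(3):
            for j in range(3):
                for k in range(3):
                    node = tuple(base[p] + (i, j, k))
                    v_new[p] += wx[i] * wy[j] * wz[k] * vel[node]
    return v_new

# A single stationary particle picks up exactly gravity * dt:
x = np.array([[4.5, 4.5, 4.5]])
v = np.zeros((1, 3))
v_new = p2g_g2p_step(x, v, (10, 10, 10), dt=0.1)
```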
Our algorithm also calculates interaction with complex objects using our custom neural object representation technology. This technology allows us to pack the Signed Distance Function of an object to save memory and unpack it on the fly directly on the GPU. To calculate interaction with an object, our algorithm evaluates the neural signed distance field to determine how far a particle is from the object. If it is close enough, it adds a force in the direction of the normal, which can be computed from the SDF gradient, making the particles reflect off the object.
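The gradient-based collision response can be sketched generically. Everything here (the analytic SDF, the penalty-force model, the distance threshold and stiffness values) is our own illustrative choice, standing in for the neural SDF evaluated on the GPU: compute the distance, take the SDF gradient as the surface normal, and push particles that are close or penetrating along that normal.

```python
import numpy as np

def sdf_collide(pos, vel, sdf, dt, eps=1e-3, stiffness=50.0, margin=0.05):
    """Penalty-style collision response against a signed distance function.

    The surface normal is the (numerically estimated) SDF gradient.
    Particles closer than `margin` (or inside, d < 0) receive a force
    along the normal proportional to how deep they are. A simplified
    stand-in for the neural-SDF collision step, not Zibra's code.
    """
    d = sdf(pos)
    if d > margin:                    # far from the surface: no response
        return vel
    # Central-difference gradient of the SDF gives the normal direction.
    grad = np.array([
        sdf(pos + [eps, 0, 0]) - sdf(pos - [eps, 0, 0]),
        sdf(pos + [0, eps, 0]) - sdf(pos - [0, eps, 0]),
        sdf(pos + [0, 0, eps]) - sdf(pos - [0, 0, eps]),
    ]) / (2 * eps)
    normal = grad / np.linalg.norm(grad)
    # Force proportional to penetration depth, along the outward normal.
    return vel + stiffness * max(margin - d, 0.0) * normal * dt

# Usage: a unit sphere at the origin. A particle just inside the surface
# is pushed outward; one far away is untouched.
sphere = lambda p: np.linalg.norm(p) - 1.0
v_hit  = sdf_collide(np.array([0.9, 0.0, 0.0]), np.zeros(3), sphere, dt=0.01)
v_miss = sdf_collide(np.array([3.0, 0.0, 0.0]), np.zeros(3), sphere, dt=0.01)
```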
For rendering, Zibra Liquid uses two render algorithms. The first one utilizes a custom atomic rasterizer and a jump flood algorithm to rasterize a massive number of spheres. The spheres are then blurred using a bilateral filter to get the final normals for rendering the liquid. Simply put, it works by drawing the centers of the spheres and then extending these centers to neighboring pixels; spheres grow out of them, like crystals from seeds. This method can draw even 50 million spheres and scales exceptionally well for large numbers of particles.
The second algorithm currently being implemented in Zibra Liquid utilizes a mesh extraction and rendering algorithm. It takes the data from the simulation grid and produces a mesh using the Dual contouring algorithm.
The mesh itself consists of triangles, the number of which can vary depending on the amount of liquid. These triangles are adjusted by using gradient descent and then a Laplacian smoothing operation to better resemble the liquid’s surface. After that, our custom shader can render this mesh.
To properly visualize effects like refraction, total internal reflection, and transparency (absorption and scattering), we use a ray marching algorithm to approximately trace rays inside the liquid. This allows for a far greater level of realism than is usually seen in real-time liquid simulations.
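The absorption part of such a ray marcher is simple to sketch. The snippet below (a generic Beer-Lambert accumulation, our own illustration rather than Zibra's renderer) steps a ray through a density field and multiplies the transmittance down at each step; the surviving fraction of light determines how dark the liquid looks along that ray.

```python
import numpy as np

def ray_march_absorption(density, origin, direction, step, n_steps, sigma):
    """Accumulate transmittance along a ray (Beer-Lambert absorption).

    `density(p)` returns the local liquid/smoke density at point p and
    sigma is the absorption coefficient. Returns the fraction of light
    that survives the march; 1.0 means fully transparent.
    """
    transmittance = 1.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        # Each step attenuates light by exp(-sigma * density * step).
        transmittance *= np.exp(-sigma * density(p) * step)
        p = p + d * step
    return transmittance

# Uniform density 1.0 over a unit-length ray: exp(-sigma) survives.
t = ray_march_absorption(lambda p: 1.0, [0, 0, 0], [1, 0, 0],
                         step=0.01, n_steps=100, sigma=2.0)
```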
Compared to particle rendering methods, rendering the mesh has the huge benefit of using a hardware rasterizer. It allows for much better scaling of the performance with the resolution. It also helps improve performance, which is crucial when dealing with, for example, mobile devices with a limited performance budget.
All of these combined attributes make Zibra Liquid a unique solution for small to midsize liquid simulations. It can simulate up to tens of millions of particles on high-end hardware, hundreds of thousands of particles on laptops, and tens of thousands of particles on mobile devices, all in real-time.
Every aspect of the Zibra Liquid simulation pipeline is highly optimized. We don’t use open-source code. Everything we do aims to ensure the plugin’s compatibility with all hardware platforms and different APIs, as well as maximum speed. Our API is designed for third-party engines from the get-go. The plugin design allows for scaling the particle count, and will even enable the export of a high-quality liquid mesh in the future. Thanks to its core technology, Zibra Liquid can easily become a framework for other visual effects.
If you have any ideas on how to improve our plugin, feel free to share them via email at support@zibra.ai or in our Discord community. In terms of development, we follow our roadmap, but your comments help us prioritize our tasks accurately.
Approaches to real-time fluid simulation in visual effects
Published as is from April 21st, 2022.
On February 24th, Russia launched a full-scale war against Ukraine. In the months leading up to this, Ukrainian media actively published news of a possible invasion. Businesses contemplated what to do in case the worst happened. IT companies were investing in the military sector and thinking about a potential relocation. Nonetheless, no one was 100% ready for the war. Our startup Zibra AI, which develops technological solutions for the gaming market, also had plans in case of an invasion. Many of these had to be changed on the go. This is the story of how Zibra AI faced the news about the war, adapted to life under martial law, and what our company is planning to do in the future. Spoiler: Zibra AI promises not to leave Ukraine and to do everything to strengthen the country's economy.
Zibra AI is a Ukrainian startup created in 2020 by a group of developers who decided to use revolutionary AI approaches to accelerate game development: namely, to improve game graphics, optimize performance, and reduce the overall size of 3D content. The startup is part of the technological ecosystem formed by Roosh.
Currently, our project has a team of over 20 employees working on several products simultaneously. One product already on the market is the Zibra Liquids plugin, which uses neural networks to create realistic real-time fluid simulations. Other products are being developed as we write this.
The majority of our employees usually work from an office located in the heart of Kyiv. However, after the war began, the team had to adapt to the new realities.
In the months leading up to the war, Zibra AI repeatedly discussed the threat of a possible Russian invasion. From the beginning of February, when more and more information about possible aggression began to circulate in the Ukrainian media, our company leadership organized regular meetings with its security department. They provided employees with up-to-date information on the situation in the country and an action plan for each potential scenario.
We considered the aggravation of the situation in the Joint Forces Operation zone (in Eastern Ukraine) to be the most probable turn of events. A full-scale Russian invasion was considered a more distant, unlikely, and almost apocalyptic scenario.
Nevertheless, Zibra AI offered everyone the possibility to relocate to safer regions of the country. Most of the team decided to stay in Kyiv and worked there until the beginning of the war. On February 23rd, our employees left the office with plans to return the next day. On February 24th, they woke up to explosions near their homes and reports in the media and on social channels that the war had begun.
Adapting to this situation wasn’t easy. On the first day of the war, there was a terrible panic in Kyiv.
However, the company’s management was able to get themselves together and make some critical decisions. On February 24th, the company paid all employees in full and provided additional financial assistance to those who needed it.
Simultaneously, Roosh began to activate its evacuation plan for all employees, adapting where necessary to the quickly changing situation. A group of employees was responsible for the relocation: getting in contact with all the teams, finding transport for those in need, and securing safe places to stay.
Starting from February 25th, our team gradually relocated from the capital to safer regions. Some employees returned to their parents’ homes, whilst others rented accommodations or hotel rooms in safe locations.
In the last few weeks, most of our team moved to Western Ukraine. Around 8% of employees decided to stay in Kyiv. Everyone has been trying to find their purpose, whether it’s in territorial defense, volunteering, launching informational campaigns, or putting in as much effort as they can into their work to support the Ukrainian economy.
Now Roosh is considering creating several local hubs that would allow all of its projects to set up workflows. The company is in search of premises in Western Ukraine.
The war has left its mark. In the first month, the pace of work on the first Zibra AI product, the Zibra Liquids plugin, and on other projects slowed somewhat. Our company has had to push back a couple of deadlines.
Our priority is people and their safety. We resumed our work only after ensuring that everyone was safe and had the minimal comfortable conditions to continue working.
Alex Petrenko, CEO and Co-Founder of Zibra AI
But eventually, the team began to gradually return to its routine. We have now resumed work at full capacity and continue to actively improve our flagship product. This month, Zibra AI released a new version of the Zibra Liquids plugin with support for Unreal Engine 4. Now everyone can apply for early access completely free of charge. Work on Android, VR, and AR support is also underway.
In their free time, our team is pushing several volunteer initiatives. Members of Zibra AI work on information campaigns, particularly on how Ukrainians should behave abroad, coordinate humanitarian aid to the Territorial Defense, and work on other projects.
Despite the war in the country, Zibra AI continues to grow. Financial indicators are getting better, and the number of customers increases every day.
In regard to customers, we see positive dynamics. This is mostly because our target audience is geographically broad: we sell to the whole world, except for one country.
Kostyantyn Tymoschuk, Head of Growth at Zibra AI
After the beginning of the full-scale Russian war against Ukraine, ZibraAI ceased doing business with Russia.
Even in these trying times, our team strives to take care of its clients as much as possible. Customer care is one of Zibra AI’s core values.
“People working on the company’s products are very fond of the technologies they create. They dedicate their free time apart from product development to communication with our clients. This initiative comes from them. The company supports and recognizes this approach at the level of the strategic vision. However, this attitude is primarily based on the employees’ personal values.”
says Kostyantyn.
We have a Discord community. It consists of 2500+ members from around the world. In this community, members of our team communicate with customers, help them solve specific technical issues, receive feedback, and discuss ideas together.
For example, in the first days of the war, developers Dima Bulatov and Mykhailo Moroz answered questions about the work of the Zibra Liquids plugin from the bomb shelter.
It is currently difficult to predict precisely how the war in Ukraine will unfold and when it will end. However, no matter what happens, and wherever the company’s offices will be in the future, our team will continue to help the country in all possible ways.
ZibraAI has already found a balance between volunteering and work and resumed all operations, hoping that it will help strengthen the Ukrainian economy and assist the country in these trying times.
After all, business is the future of the country’s economy, and it needs to evolve and maintain the country’s defense capabilities, among other things.
Roman Mogilny, COO of Zibra AI.
How does young Ukrainian startup ZibraAI adapt to work during wartime?
Even if you have experience working with different visual effects or fluid simulations, you will still encounter some terms specific to our tool. To make your life easier, we compiled a Zibra Liquid Glossary that should help you get used to our asset. This list will be expanded as we update our product.
Force interaction – is a feature that enables liquid to push colliders. Without force interaction, colliders apply force to the liquid, but not the other way around. This feature is only available in the full version.
Force field – is a feature that allows you to apply force to the liquid in various ways. Force fields have different types that define how exactly the force is applied: a radial force field pulls or pushes liquid toward or away from a specified point, a directional force field pushes liquid in a specified direction, and a swirl force field makes liquid rotate around it.
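The three field types can be sketched with simple vector formulas. A hypothetical Python illustration (these formulas are our own simplification for exposition, not Zibra Liquid's actual math):

```python
import math

def radial_force(p, center, strength):
    """Pushes (strength > 0) or pulls (strength < 0) a particle at p
    along the direction from the field's center."""
    d = [p[i] - center[i] for i in range(3)]
    mag = math.sqrt(sum(c * c for c in d)) or 1.0
    return [strength * c / mag for c in d]

def directional_force(p, direction, strength):
    """Applies the same force to every particle, along a fixed direction."""
    return [strength * c for c in direction]

def swirl_force(p, center, axis, strength):
    """Force tangent to circles around the axis: cross(axis, p - center),
    which makes particles orbit the field."""
    d = [p[i] - center[i] for i in range(3)]
    cross = [axis[1] * d[2] - axis[2] * d[1],
             axis[2] * d[0] - axis[0] * d[2],
             axis[0] * d[1] - axis[1] * d[0]]
    mag = math.sqrt(sum(c * c for c in cross)) or 1.0
    return [strength * c / mag for c in cross]
```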
Neural SDF representation – is compressed data generated for a neural collider that contains the shape of an object. It can only be generated in the editor, and the generation process happens on Zibra AI's servers. The data is stored in compressed form in VRAM and gets decompressed on the fly, exactly when the liquid simulation needs it. Thanks to that, we can have complex liquid colliders in a real-time simulation that would otherwise require too much computational power.
Liquid collider – is a collider component that defines the shape of an object for the purposes of liquid collisions. Zibra Liquid supports two types of colliders: analytic and neural.
Analytic collider – is a liquid collider that allows you to use simple shapes (cubes, spheres, capsules, tori, and cylinders) as colliders for the liquid.
Neural collider – is a liquid collider that allows you to use any static mesh as a collider for liquid. Neural colliders use our custom Neural SDF representation and need to be generated from the static mesh in the editor before they can be used.
Liquid manipulator – is an object that defines user interaction with the liquid. Zibra Liquid has several types of manipulators: Emitter, Void, Detector, and Force Field.
Liquid emitter – is a manipulator that creates new particles in the simulation. You need an emitter to get any liquid into the simulation; when you first create Zibra Liquid, one emitter is automatically created and added to it. Apart from adding liquid, the emitter also counts the number of emitted particles per frame and their total amount (the counter is only available in the full version).
Liquid void – is a manipulator that deletes particles inside it. It also counts the number of deleted particles per frame and their total amount (the counter is only available in the full version).
Liquid detector – is a manipulator that detects how many liquid particles are in a specified volume (only available in the full version).
Simulation grid – is a grid that stores information about the simulation state and allows particles to interact with each other. A higher grid resolution increases the simulation quality but decreases performance and consumes more VRAM. When selecting the liquid's game object, you'll see a preview of the grid resolution.
Liquid particle – is an abstract primitive that represents a very small amount of liquid that cannot be subdivided. The whole simulation consists of many liquid particles.
Initial state – is a state of the liquid at a single point in time that can be used as the starting point of the liquid simulation.
Rendering mode – is the rendering method Zibra Liquid uses to render liquid to the screen. There are multiple options with different visual, performance, and compatibility characteristics:
Liquid ray marching – is the process of marching through the liquid, simulating the path of light inside it. It can also be called liquid ray tracing. In Zibra Liquid, we use software ray marching when the “Mesh render” rendering mode is used. We don't use RTX or DXR (yet).
Visualize scene SDF – is an option to visualize liquid colliders. It can be used for debugging. This option works only when the rendering mode is set to particle render.
Invert SDF – is a feature that inverts any liquid collider. For example, if you invert the SDF of a sphere, the liquid will only be allowed inside the sphere and not the other way around.
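In code, inverting a signed distance field amounts to a sign flip. A toy Python sketch (the sphere SDF here is our own illustration, not Zibra Liquid's implementation):

```python
def sphere_sdf(p, radius=1.0):
    # signed distance to a sphere at the origin: negative inside, positive outside
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - radius

def invert(sdf):
    """Flipping the sign of an SDF swaps inside and outside, so liquid
    that was kept out of a shape is instead contained within it."""
    return lambda p: -sdf(p)
```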
Zibra Liquid Glossary (Common Zibra Liquid terms)
Volumetric data has numerous important applications in computer graphics and VFX production. It’s used for volume rendering, fluid simulation, fracture simulation, modeling with implicit surfaces, etc. However, this data is not so easy to work with. In most cases, volumetric data is represented on spatially uniform, regular 3D grids. Although dense regular grids are convenient for several reasons, they have one major drawback – their memory footprint grows cubically with respect to grid resolution.
The OpenVDB format, developed by DreamWorks Animation, partially solves this issue by storing voxel data in a tree-like data structure that allows the creation of sparse volumes. The beauty of this system is that it completely ignores empty cells, which drastically decreases memory and disk usage while making the rendering of volumes much faster.
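The savings are easy to quantify with back-of-the-envelope arithmetic. A small Python sketch (the block size and occupancy figures are hypothetical, chosen only to illustrate the scaling):

```python
def dense_voxels(resolution):
    """A dense grid stores every cell, so storage grows cubically
    with resolution."""
    return resolution ** 3

def sparse_voxels(occupied_blocks, block_size=8):
    """A VDB-like sparse layout stores only blocks that contain data;
    empty space costs (almost) nothing."""
    return occupied_blocks * block_size ** 3

dense = dense_voxels(256)     # 16,777,216 voxels; doubling resolution gives 8x
sparse = sparse_voxels(500)   # e.g. a thin smoke shell occupying 500 blocks
```

For this hypothetical shell, the sparse layout stores over 60 times fewer voxels than the dense grid at the same resolution.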
First introduced in 2012, OpenVDB is nowadays commonly applied in simulation tools such as Houdini, EmberGen, and Blender, and used in feature film production for creating realistic volumetric images. The format, however, lacks GPU support and cannot be used in games due to the considerable file sizes (on average at least a few gigabytes) and the computational effort required to render 3D volumes.
To bring high-quality VFX to game development, another approach is usually applied. Artists simulate volumetric effects in Houdini, Blender, or other tools and then export them as flipbooks: simple 2D textures that imitate the look of the 3D effect. These textures weigh approximately 16-30 MB and can be rendered in game engines in real time. However, they have several traits that make them lack realism and visual quality.
First, flipbooks are baked from one camera view, which makes it hard to reuse them many times in a game or to make a long-lasting effect that looks realistic from a moving point of view. Second, as these textures are baked into a game, they cannot interact with the game environment. Using them, it's hard to achieve the level of realism that high-quality VDB effects can provide.
Several attempts have been made to fix the issue. One of them is NanoVDB, NVIDIA's version of the OpenVDB library. This solution offers one significant advantage over OpenVDB, namely GPU support. It accelerates processes such as filtering, volume rendering, collision detection, and ray tracing, and allows you to generate and load complex special effects much faster than OpenVDB. Nevertheless, the NanoVDB structure does not significantly compress volume size, so it's not commonly applied in game development.
Nowadays, when powerful consumer GPUs have lifted existing limitations for game developers, gamers expect more realistic and engaging games. ZibraVDB is the newest Zibra AI solution, being developed to bring film-quality VFX into games with GPU-powered compressed VDB effects.
Born from a custom AI-based technology, it makes it possible to:
Our VDB compression solution also opens new possibilities for realistic scene lighting. With our tech, you can use light data from VFX to light up a scene, add reflections, etc., making your game much more immersive and true to life.
ZibraVDB is designed to work with the channels needed for rendering, specifically density, heat, and temperature. It uses lossy compression, meaning that there is always a trade-off between the quality and the size of the visual effect.
Alex Puchka, Technical Director at Zibra AI.
However, we are working on ensuring that our tech provides the highest compression rate and minimal visible difference between compressed and decompressed VFX.
In this example, you can see the original and compressed versions of the same visual effect. Compressed 16.46 times, it has a peak signal-to-noise ratio of 37.80 dB and an F1 score of 0.92.
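For reference, PSNR is computed from the mean squared error between the original and reconstructed data. A minimal Python version, operating on flat lists for illustration:

```python
import math

def psnr(original, compressed, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length signals.
    Higher is better: around 38 dB means the reconstruction error is
    small relative to the signal's peak value."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float('inf')  # identical signals
    return 10 * math.log10(peak ** 2 / mse)
```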
Our solution can be integrated into Unity, Unreal Engine, or any custom game engine. With ZibraVDB, you can compress even the heaviest visual effects so that they can be used in your project without drastically sacrificing quality or performance, bringing your game to a completely new level.
All you have to do is:
The ZibraVDB test plugin for Unreal Engine 5 has reached an impressive frame rate of 120 FPS when rendering on an RTX 3090; depending on the hardware, the frame rate may differ. This rendering speed is primarily due to our innovative approach to working with compressed data, which stores all the essential information from the original VDB volume in a highly optimized format.
ZibraVDB is currently being polished. We are still improving the compression rate-quality ratio and optimizing our approach to ensure it fully corresponds to the industry requirements, but are getting ready to release our newest tool as soon as possible. ZibraVDB is a new tool in the Zibra AI ecosystem of complementary AI-assisted solutions for virtual content creation.
All existing Zibra AI solutions are designed to simplify the process of creating content for games and to improve its quality. The Zibra Liquids and Zibra Smoke & Fire real-time simulation tools allow game developers to add interactive and dynamic visuals to their projects and build game mechanics, even for mobile games. ZibraVDB enables those working with baked effects to use lightweight OpenVDB assets in their games. Click here to learn about all Zibra AI products.
ZibraVDB – a new solution, bringing groundbreaking OpenVDB format to game development
VDB sequences enable users to create novel cinematic experiences for both games and virtual production projects, but they come with significant challenges: high memory usage and costly volume rendering.
ZibraVDB addresses these issues by compressing OpenVDB files to fit into GPU memory and offering an efficient real-time renderer that scales from high-end to low-end GPUs.
“Car Accident” is a demo project created by the Zibra AI team to demonstrate ZibraVDB's capabilities for creating next-gen immersive experiences in real-time scenarios.
In this demo, we used UE 5.4 and the ZibraVDB plugin to compress and render VDB sequences in real time. Initially optimized for mid-level GPUs like the 4070Ti (achieving 60 FPS in 2K), we further refined it for PS5-level GPUs, such as the AMD RX5700XT, showcasing ZibraVDB's capabilities for games.
ZibraVDB is an Unreal Engine plugin for both Virtual Production studios using real-time workflows and game development companies seeking to integrate optimized volumetric VFX into their projects.
ZibraVDB efficiently replays large VFX volumes stored in OpenVDB format within Unreal, delivering up to 100x compression and rendering times twice as fast as Unreal's SVT. It enables the use of realistic volumetric effects for both cinematics and gameplay, offering a more dynamic and immersive visual experience compared to traditional flat flipbooks.
With ZibraVDB, game developers can elevate their games with volumetric VFX, creating realistic close-up scenes. Additionally, ZibraVDB allows for more OpenVDB effects without increasing disk space usage, ensuring optimal render performance and significantly enhancing the visual quality of games. You can read more about ZibraVDB here.
OpenVDB effects are rarely used in real time due to the intensive computational resources required:
High Memory Usage: OpenVDB sequences require a significant amount of memory to store and process. This can be particularly problematic for tech artists who need to work with complex scenes and high-resolution volumetric effects. The high memory footprint can lead to slowdowns and limit the ability to iterate quickly, as scenes take longer to load and modify.
Costly Volume Rendering: Rendering volumetric data, such as smoke, fire, and clouds, in real time, is computationally expensive. Traditional rendering methods consume substantial processing power, leading to lower frame rates and suboptimal performance, especially on mid- to low-end hardware. This can be a major bottleneck for tech artists who need to maintain high visual fidelity while ensuring smooth performance.
Bandwidth Bottlenecks: Unreal Engine’s standard volumetric rendering can create bandwidth bottlenecks, particularly when streaming large VDB sequences from memory to the GPU. This can cause frame rate drops and hinder the ability to achieve real-time rendering, which is crucial for interactive applications and live previews.
As a result, we have created a demo scene where the final optimized version runs at 60+ FPS on an AMD RX5700XT in Full HD. The VDB rendering takes 4ms per frame.
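To put that number in context, a quick budget calculation (the 60 FPS target and the 4 ms figure come from the demo described above):

```python
def frame_budget_ms(fps):
    """Time available to render one frame at a target frame rate."""
    return 1000.0 / fps

budget = frame_budget_ms(60)   # about 16.7 ms per frame at 60 FPS
vdb_share = 4.0 / budget       # 4 ms of VDB rendering uses ~24% of the budget
```

That leaves roughly three quarters of the frame for everything else in the scene.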
While further compression could enhance performance, we prioritized maintaining the high quality of the original sequence.
Additionally, we plan to introduce new features to further optimize the effects:
⚫️ Support for Downscaled Channels: For instance, downscaling the temperature channel can significantly reduce size, without any noticeable visual impact.
⚫️ Separate Compression Quality for Each Channel: Different channels require varying levels of detail. Compressing channels individually will improve the overall compression rate.
ZibraVDB Showcase — Car Accident Demo: Real-Time VDB Rendering for Games
Hydrolab is a game that combines water physics with augmented reality (AR) for a unique, real-time puzzle experience. It's made using Unity, allowing players to interact with water in a way that's central to the gameplay.
It was developed by Lee Vermeulen, co-founder of Alientrap Games, who leverages VR and AR technologies to redefine gaming aesthetics and mechanics.
Zibra Effects is a multipurpose, no-code toolset designed to simplify VFX creation and make virtual worlds more immersive by leveraging real-time simulation and realistic physics.
It helps make video game worlds feel real by using advanced simulations and realistic actions. We offer a special technology that helps create life-like effects of liquids, smoke, and fire quickly and easily.
You can use these effects in different kinds of projects, including games on computers, phones, and gaming consoles, as well as in movies and mixed reality projects. Learn more about Zibra Liquid, Zibra Smoke & Fire, and Zibra Effects and what you can do with them here.
The process of making AR games, especially those with complex liquid simulations, is full of technical hurdles. It requires achieving realistic liquid rendering within the AR space, while achieving high levels of performance.
For Hydrolab, the ambition to simulate realistic liquid behaviors within an interactive environment required a toolset capable of detailed, high-performance rendering without compromising on gameplay fluidity (no pun intended).
Lee needed a solution that could seamlessly integrate with Unity, support AR projects, and provide an authentic liquid simulation that players could interact with in real-time.
For Hydrolab, Lee hasn't stopped at just liquids. He also announced smoke and fire effects in the game environment, which are also available in the Zibra Effects package that includes the previously showcased liquid simulation.
“Just another thing we're adding to the game is smoke physics, also made by Zibra AI, the developers who made the liquid physics package. These are just cool smoke effects, and we're trying to add them into the game.”
Hydrolab: Navigating Realistic Liquid Physics in AR Gaming
Echoes of Somewhere is an ambitious 2.5D point and click adventure game anthology series in development, depicting a dystopian future, where human society is ruled by machines.
The project aims to create immersive experiences utilizing AI-assisted graphics and storytelling.
Among the visual elements the creators sought to incorporate were realistic, interactive smoke, liquid, and fire effects to enhance the game's atmosphere and authenticity.
Zibra Liquid and Zibra Smoke & Fire played a key role in enhancing the Echoes of Somewhere’s visuals, making them more realistic and immersive.
The creators faced challenges in developing special effects like smoke and water that were both high-quality and efficient for different gaming devices.
Zibra AI solutions solved these issues. Zibra Smoke & Fire introduced realistic smoke effects that fit the game's environment and make it look more natural using only a modest amount of frame budget.
Additionally, Zibra Liquid enhanced the game's liquid elements and allowed the developers to create water that responds to the player's actions and interacts with its surroundings in a believable manner.
Take a more detailed look at the full creator's journey in Jussi Kemppainen's blog.
More about Zibra Effects
Zibra Effects is a multipurpose, no-code toolset designed to simplify VFX creation and make virtual worlds more immersive by leveraging real-time simulation and realistic physics.
Powered by a custom physics solver and unique technology for neural object representations, it makes it possible to easily create realistic, high-quality VFX based on real-time simulated liquid, smoke and fire.
These effects can be used on small to medium scale and applied for various use cases, from PC, mobile, and console games to cinematics and even MR projects.
Read more about Zibra Liquid, Zibra Smoke & Fire, Zibra Effects and their potential applications here.
A further challenge was to include liquid simulation. In the first episode of Echoes of Somewhere, the story is focused on water.
The developer wanted to use realistic water effects to create custom game mechanics based on liquid control, where water reacts to player actions and interacts with its surroundings in real-time.
Creating VFX for games, artists have to consider various challenges. First, games have to run smoothly on a wide range of hardware. This means VFX artists have to create effects that look good and perform well on lower-end hardware, which can limit visual fidelity for owners of high-end hardware.
VFX can also significantly weigh down the asset size, drastically increasing a game's storage requirements.
Echoes of Somewhere is a game that needs to run on both low- and high-end PCs.
According to one of the game's creators, Jussi-Petteri Kemppainen, the initial approach to creating the desired smoke effect in Echoes of Somewhere involved using EmberGen, JangaFX's real-time fluid simulation software.
The process required exporting pre-rendered flip-book animations.
However, flipbooks can be limiting in terms of fluid simulation duration. Additionally, because these textures are baked into the game, the limited interaction between the fluid simulations and the 3D game environment was also a concern.
Fine-tuning the visuals mainly involved matching the smoke color to the scene. Zibra Smoke & Fire has many built-in parameters that enable users to control the look and behavior of the simulation according to their needs.
The smoke's lighting was adjusted using a directional point light, a part of Zibra Smoke & Fire unique lighting system.
The end result was a realistic, full-screen smoke simulation with complex scene interactions that added only about 2 ms to frame times, showcasing Zibra Smoke & Fire's performance efficiency.
To achieve natural-looking water, the developer tweaked several liquid parameters, such as the index of refraction, scattering, and absorption.
A dark color was chosen for the water, and the simulation volume resolution was increased to make the water behave more naturally.
“When I was tweaking the options, I found out that the most natural look comes when the index of refraction is set unrealistically low, the scattering to 0, and the absorption to some low value. With a dark color, this made the water look pretty natural to me. You can also render the water as a normal mesh with any custom material you like, but I found out that this made the performance dramatically worse.”
Shadows were added to the scene to give depth and realism to the water simulations. A reflection probe was also incorporated to enhance the visual fidelity of the water by reflecting the environment.
“I duplicated my special shader version of the scene mesh (that has the custom shadow renderer), placed a simple default lit shader on it, and changed the render to shadows only. This is to make the environment cast shadows on the player character when it is moving in the scene. It now also casts shadows on itself as well, but it seemed to work ok as long as I lined up the light with the shadows that were in the AI image. The fluid rendering also requires a reflection probe, so I placed one in the scene and created an otherwise invisible mesh contraption that fills the probe with something for it to reflect.”
To adjust the behavior of the simulation, the creator also set up the volume resolution parameter.
“Getting the water to act naturally requires the simulation volume resolution to be quite high. In this scene there also is quite a lot of water, so the volume had to be rather large. I was pretty surprised to see this scene run in the Unity editor at 100 fps even on my M1 Max. On my PC with a 2080, it easily surpasses 250 fps. This is pretty bonkers!”
Enhancing Game Visuals with Zibra Liquid and Zibra Smoke & Fire - A creator's journey