Tuesday, July 27, 2010

Optimizing Simulations with Basic General Relativity for Client/Server Architecture

Let's optimize the way 3D clients perceive cyberspace from a server-side simulation, and let's do it with ReST concepts. Do you doubt it can be done? You probably wonder how to copy the entire dataset of the scene from server to client in order to display it, but that would be the mundane assumption. Let's skip that step entirely, as it doesn't follow General Relativity. We want to let clients quickly peek into a server-side simulation and observe only the specific array of perception that relates to those clients.

[Video: Footage of Crysis Running on the iPad]
What's the trick? It's not a new idea or a secret, just an idea that hasn't been effectively used by the graphics card industry. They haven't had any reason to use it, since their market isn't about client/server architecture. That may change as people opt in to server-side games such as otoy and OnLive. The physics in otoy and OnLive, however, are optimized for games with static content. Otoy becomes basically a remote controller rather than a way to do optimized, concurrent simulations for client/server technology. So let's also set aside remote-controlled-stream technology, along with the mundane assumption.
[Video: C# based Ray-Tracer]


The first mind-bender in this concept is to turn ray-tracers into physical volume detectors that return a unique object identifier for a cast ray, with an optional density threshold. Wherever that ray stops, that object's identity is what gets returned. That's it, nothing more.


The ray would start from a given origin within a bounded topology (like a typical 3D cubical simulation) and stop where it detects an object of greater density than the threshold. Unlike a typical ray-tracer that continues to bend and refract rays, we only want the straightest ray possible until that threshold is triggered. If nothing is detected, then a null identity, like a zeroed UUID, can be used to mean full transparency. Don't assume zeroed means void, or the general-relativity optimizations won't work.
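Here's a minimal sketch (in Python) of the kind of primitive I mean. The object list, the 'contains' predicate, and the NULL_UUID convention are all hypothetical stand-ins for illustration, not any existing engine's API:

    import uuid

    # A zeroed UUID means "nothing detected": full transparency, not void.
    NULL_UUID = uuid.UUID(int=0)

    def cast_ray(objects, origin, direction, threshold, max_dist=256.0, step=0.1):
        """March a straight ray and return the UUID of the first object whose
        density exceeds the threshold. No bending, no refraction."""
        pos = origin
        travelled = 0.0
        while travelled < max_dist:
            for obj in objects:  # each obj: {'id': UUID, 'density': float, 'contains': callable}
                if obj['density'] > threshold and obj['contains'](pos):
                    return obj['id']        # identity only -- no color, normals, or shading
            pos = tuple(p + d * step for p, d in zip(pos, direction))
            travelled += step
        return NULL_UUID                    # nothing dense enough: full transparency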


An LSL script can do part of this idea, without the threshold, with its llCastRay() function. llCastRay() can help us learn the concepts for applying general relativity to optimize simulations in a client/server architecture.

We can let the server sample parts of the scene with an array of llCastRay() calls from a point, and then pass those IDs to the client. The client could continue to process that data and apply further detail to the scene.
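As a rough sketch of that server-side sampling (in Python rather than LSL, reusing the hypothetical cast_ray() above), the grid size and camera basis here are arbitrary choices:

    def normalize(v):
        n = sum(c * c for c in v) ** 0.5
        return tuple(c / n for c in v)

    def sample_scene(objects, eye, forward, right, up, grid=8, spread=0.5, threshold=0.0):
        """Cast a small grid of rays from one point and collect the object IDs
        they return; this stands in for an array of llCastRay() calls."""
        ids = set()
        for y in range(grid):
            for x in range(grid):
                # Map each grid cell to a direction fanned out around 'forward'.
                u = (x + 0.5) / grid * 2.0 - 1.0
                v = (y + 0.5) / grid * 2.0 - 1.0
                direction = normalize(tuple(f + spread * (u * r + v * w)
                                            for f, r, w in zip(forward, right, up)))
                ids.add(cast_ray(objects, eye, direction, threshold))
        return ids   # the client resolves each UUID to mesh data it may already have cached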

Sounds easy?


These IDs, perhaps a set of Universally Unique Identifiers, would be the significant result, rather than the data you would normally expect from a ray-tracer. Instead of Red, Green, Blue, Luminance, Reflectivity, Glossiness, Bumpiness, Refraction, or other data, the UUID would refer to more static data. With this array of UUIDs we could easily render an entire scene without the need to cast rays at every cubic centimeter of the scene. With these UUIDs, we can cast the fewest rays, yet render a scene more detailed than the rays we cast.

The easiest pick for the first ray-cast direction is to start in the center. Then we could expect that ray to return some object UUID. Here is the catch to the trick: either that UUID relates to a mesh dataset we already know, or to one we have yet to retrieve information about. If we already know that mesh dataset, we could render it based on its coordinates and size. We could further use its volume data to determine where not to cast the next ray. We skip over knowns, and retrieve unknowns if possible.

For example, if the camera is directed at a wall larger than its viewport, then a single ray-cast could determine this and render the entire scene. If the wall is smaller than the viewport, then the next ray-cast would be in an area not already covered by the wall.
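A sketch of that skip-over logic, assuming a hypothetical screen_bounds() helper that projects a known mesh's bounds into viewport cells:

    def adaptive_cast(cells, known_meshes, cast_at, screen_bounds):
        """Cast rays only into viewport cells not already covered by known meshes.

        cast_at(cell) performs a ray-cast through that cell and returns a UUID;
        screen_bounds(mesh) returns the set of cells that mesh covers on screen."""
        covered = set()
        hits = {}
        for cell in cells:                       # e.g. start at the center and spiral outward
            if cell in covered:
                continue                         # skip over knowns
            obj_id = cast_at(cell)
            hits[cell] = obj_id
            mesh = known_meshes.get(obj_id)
            if mesh is not None:
                covered |= screen_bounds(mesh)   # its volume tells us where not to cast next
        return hits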

We can continue to apply this technique to sample a scene's data from a server, gather object IDs, retrieve mesh data as needed (with an active cache), and feed the data to our current graphics cards. This method of using ray-casts to gather the scene is distinct from the more common technique of subscribing to all data in front of the camera view for a given distance. With this ray-cast technique we only subscribe to data that is returned by ray-casts.
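On the client side, the "active cache" could be as simple as a dictionary keyed by UUID; fetch_mesh() below is a hypothetical ReST call, not any real API:

    class MeshCache:
        """Client-side 'active cache': mesh data is requested only for UUIDs that
        ray-casts actually returned, never for everything within view range."""

        def __init__(self, fetch_mesh):
            self.fetch_mesh = fetch_mesh       # e.g. a ReST GET against an asset service
            self.meshes = {}

        def resolve(self, obj_id):
            if obj_id == NULL_UUID:
                return None                    # transparent: nothing to render here
            if obj_id not in self.meshes:      # unknown: retrieve once, reuse afterwards
                self.meshes[obj_id] = self.fetch_mesh(obj_id)
            return self.meshes[obj_id]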

Is it fast? Let's compare how it scales. In Second Life there are hundreds of servers that each host a 256m x 256m region of virtual land. You can pan and zoom the map at SLURL to get a sense of its size.

Normally, a client viewer only sees from 64m to 512m away. It grabs all objects within that area even if they don't appear on the screen because they are covered up by other objects it also grabbed that are in front of them. If there are no objects within that distance, then nothing but beautiful sky is rendered as the background. If the viewer is set to 64m, then it'll basically draw a background beyond that, even if there is an object 65m away. As you can tell, many objects are being grabbed that may never actually need to be rendered. Let's say, for example, you are inside a Linden Home. The viewer may start to render objects 512m away before it finally determines it only needed to render the objects within a closed room. The server only needed to send the data for the objects within that room. How about open distances: what if you wanted to see a complete render of an entire mainland? That would be almost like a complete database download from several hundred servers to your drive. The viewer might finally figure out the correct scene order to display after a few million datasets are transferred.

With a ray-cast optimization, let's not worry about the 512m limit at all. I'll probably want to see the furthest possible with a centered ray-cast, so let's let that ray-cast happen and follow it. The viewer sends the ray-cast request as a single ReST request. It creates that query with the simulator where the ray-cast starts. That ray is then cast by the simulator along its vector. Let's say it didn't detect anything from that origin to the edge of the simulator, so that simulator then connects with the next simulator across the edge to continue processing the cast. The process repeats, with the new origin being the edge between these two sims. The next simulator casts the ray with the same vector but with the new origin. If it doesn't detect any objects, this process continues on to the next simulator, and so forth, until an object is finally detected. That detection might have happened 100 simulators away, or 25,600m in distance. That last simulator then sends the reply to the client about the object that was detected.
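A rough sketch of that handoff, assuming each simulator knows its own bounds and its neighbor in a given direction (exit_point() and neighbor_across() are made up for illustration):

    def cast_across_sims(sim, origin, direction, threshold):
        """Walk one ray from simulator to simulator until something is detected.

        Only the final reply -- which simulator answered and which object it saw --
        ever travels back to the client, whether the hit was 1 or 100 sims away."""
        while sim is not None:
            obj_id = sim.cast_ray(origin, direction, threshold)   # local cast, as above
            if obj_id != NULL_UUID:
                return sim.name, obj_id
            # Nothing detected locally: hand off at the shared edge, same vector.
            origin = sim.exit_point(origin, direction)
            sim = sim.neighbor_across(origin, direction)
        return None, NULL_UUID                 # the ray left the grid: open sky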

How much data was sent to the client in order to detect an object 25,600m away? Is this scalable? Remember, I started to talk about density thresholds, so we haven't even gotten into the further techniques we could apply on top of this one simple, basic step.

Those who are keen on quantum mechanics and haven't gotten lost in special relativity probably already know how to apply such a density threshold as a quantum force. The reply sent from the last simulator might easily include a net force. So far the discussion has mainly been about visual data, in the form of ray-cast results and mesh datasets. What if we applied physics data to the ray-cast? One sim can send a ray-cast with a threshold, and anywhere that ray-cast collides with (or detects) an object, it applies a negative force (equal to the density of the detected object) to that threshold; the ray-cast continues until the threshold is zero or negative. Only the final detected object is returned to the viewer. Every simulator passed through could subtract from the original threshold until it goes negative. This would simulate the effects of atmosphere.
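A sketch of that threshold bookkeeping, where 'hits' is a hypothetical ordered list of the objects a ray passes through and their densities:

    def cast_with_density(hits, threshold):
        """Subtract each detected object's density from the remaining threshold;
        stop when it reaches zero or below and report only the last object the
        ray stopped at."""
        last_id = NULL_UUID
        remaining = threshold
        for obj_id, density in hits:        # objects along the ray, in order of distance
            last_id = obj_id
            remaining -= density            # clouds subtract heavily, clear air barely at all
            if remaining <= 0:
                break                       # the "atmosphere" has absorbed the ray
        return last_id, remaining           # 'remaining' could double as the net force in the reply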

One sim may be filled with clouds, so it subtracts heavily from the threshold. Another near to it may be clear and sunny. Both sims could appear in the client viewer at a far distance, yet one lets you see beyond it while the other doesn't.

This is very simple QM, yet physicists would probably argue with it. Let's end with a comparison to understand the distinction. Take a ray-casted photo with llCastRay(), like in the video above, and keep a snapshot. Now move all the pieces around, and finally move them back to their original spots from the previous snapshot. Take another ray-casted photo with llCastRay(). If there is no difference between the two snapshot photos, then there was no change. If there was no change, then it isn't "physics" (which is about "change", aha!). Simulators may seem meta-physical, meaning "beyond change," in this example, so maybe the ray-cast optimization noted in this blog could be considered meta-simulated.

Enjoy.