Let's optimize the way 3D clients perceive cyberspace from a server-side simulation, and let's do it with ReST concepts. Do you doubt it can be done? You might assume we have to copy the entire scene dataset from server to client in order to display it, but that is the mundane assumption. Let's skip that step entirely; it doesn't follow General Relativity, where an observer only ever perceives the information that actually reaches it, never the whole universe at once. We want clients to quickly peek into a server-side simulation and observe only the specific array of perception that relates to those clients.
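To make that peek concrete, here is a minimal sketch of what such a request could look like over ReST. Everything here is a hypothetical illustration, not an established API: the endpoint path, the parameter names, and the perception_url helper are all assumptions.

```python
# Hypothetical sketch: the /simulations/{id}/perception endpoint and its
# parameters are illustrative assumptions, not a real protocol.
from urllib.parse import urlencode

def perception_url(base: str, sim_id: str, origin, direction, threshold: float) -> str:
    """Build a ReST-style query asking the server-side simulation what a
    client perceives along one ray, instead of copying the whole scene."""
    query = urlencode({
        "ox": origin[0], "oy": origin[1], "oz": origin[2],
        "dx": direction[0], "dy": direction[1], "dz": direction[2],
        "threshold": threshold,
    })
    return f"{base}/simulations/{sim_id}/perception?{query}"

print(perception_url("https://example.net", "room-42",
                     (0.0, 1.7, 0.0), (0.0, 0.0, -1.0), 0.5))
```

The point of the sketch is only that the client sends a tiny question and gets back a tiny answer; the scene itself never leaves the server.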
What's the trick? It's not a new idea or a secret, just an idea that hasn't been effectively used by the graphics card industry. They haven't had any reason to use it, since their market isn't built around client/server architecture. That may change as people opt in to server-side games such as otoy and OnLive. The physics in otoy and OnLive, however, is optimized for games with static content. Otoy is basically a remote controller for a video stream rather than a way to run optimized, concurrent simulations across client/server technology. So let's also file remote-controlled-stream technology under the mundane assumption and set it aside.
The first mind-bender in this concept is to turn ray-tracers into physical volume detectors: for each cast ray, with an optional density threshold, return a unique object identifier. Wherever that ray stops is the object whose identity gets returned. That's it, nothing more.
The ray starts from a given origin within a bounded topology (like a typical cubical 3D simulation) and stops where it detects an object whose density exceeds the threshold. Unlike a conventional ray-tracer that keeps bending and refracting rays, we want the straightest ray possible until that threshold is triggered. If nothing is detected, a null identity, such as a zeroed UUID, can be used to mean full transparency. Don't assume zeroed means void, though, or the general-relativity optimizations won't work.
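Here is a minimal sketch of that detector, assuming the scene is stored as a bounded cubical grid where each cell holds a (density, object_id) pair. The grid layout, the cast_ray name, and the fixed step size are illustrative assumptions, not a fixed design.

```python
# Sketch under stated assumptions: the scene is an n x n x n voxel grid of
# (density, object_id) cells; names and the marching step are illustrative.
import math
import uuid

NULL_ID = uuid.UUID(int=0)  # zeroed UUID: "fully transparent", not "void"

def cast_ray(grid, origin, direction, threshold, step=0.5, max_t=1000.0):
    """March a straight ray (no bending, no refraction) through a bounded
    cubical grid and return the identity of the first object whose density
    exceeds the threshold, or the zeroed UUID if nothing is detected."""
    n = len(grid)
    t = 0.0
    while t <= max_t:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        i, j, k = math.floor(x), math.floor(y), math.floor(z)
        if not (0 <= i < n and 0 <= j < n and 0 <= k < n):
            break  # the ray left the bounded topology
        density, object_id = grid[i][j][k]
        if density > threshold:
            return object_id  # the ray stops here; this identity is returned
        t += step
    return NULL_ID

# Usage: a 4x4x4 grid with one dense "wall" cell at (3, 1, 1).
wall = uuid.uuid4()
empty = (0.0, NULL_ID)
grid = [[[empty for _ in range(4)] for _ in range(4)] for _ in range(4)]
grid[3][1][1] = (1.0, wall)
print(cast_ray(grid, (0.0, 1.0, 1.0), (1.0, 0.0, 0.0), threshold=0.5))  # -> wall's id
```

Note that the zeroed UUID is an answer like any other: it tells the client "nothing dense enough along this ray," which is different from "no information at all," and that distinction is what the later optimizations depend on.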