Polygons Versus Points – The Coming Tech Revolution


by VRift720


Euclideon Unlimited-Detail Engine:  This entire cathedral, inside and out, was scanned (at 64 atoms per cubic millimeter), assembled in the computer, and runs in real time at 120 FPS!  Zoom in as far as you want; the detail is always there!



      Augmented Reality and Virtual Reality have no need to compete, as they are different products serving different needs.  Anyone who buys a VR device is just as likely to want an AR device too, because they have entirely different functions.  But there is a new participant now emerging in the war for supremacy of the PC and HMD visual mediums: a new rendering system coming out of left field … where no one has even been looking.  If it weren't for a bit of controversy back in 2012, when it was first announced, we might've been totally blindsided by this evolutionary thinker's new work.

      The emerging graphics system comes from a man named Bruce Dell, a programming genius whose life's work, from the age of 13 when he started on it, has been to lift the entire gaming industry up to a new standard of fidelity so complete that it IS reality (from a visual standpoint).  He developed a new engine that can render all the detail that exists in Nature … in real time.  It would be perfect for AR and VR because, with its rendering capabilities, it could be made to run even on smartphones, and it is tens of times faster than nVidia's best card … even in its relative infancy.  It is my belief that within 15 years, every visual-display device in existence could be powered by Euclideon's radical invention.  Here are some examples of amazing locations entirely recreated using only points, rendered in real time and at the speeds needed for VR:


Objects this organic and round would require billions of polygons and take up more memory than any single PC can even hold.   But it’s easy for Euclideon’s Unlimited-Detail Engine.

Or this one:


Real-world objects are scanned in, so models don’t need to be made by hand.  All of the texturing is contained in the scan, so creating textures is no longer needed.  Artists can still tweak, though, since tweaking is always needed.  Tweaking, not twerking…

Or this one:


Show me a PC that could render this scene in such high detail without becoming a brick.  Euclideon Engine renders it now at 120 FPS.

      Bruce Dell has created a new standard for graphics that will eventually replace the current one, but um yeahhhh, that is not going to be an easy feat to accomplish!  A lot of people, companies, and CEOs are heavily invested in the polygon.  It has been the best method so far.  But while CGI studios can produce nearly realistic 3D visuals today, the gaming industry cannot do the same with its games, not in real time.  That requires radically different approaches, and nearly all the ideas, innovative thinking, and concepts formed around the polygon have been pushed about as far as they can go.  And still we are not there.  Therefore (with a strong need to improve but no more room for growth in the polygon era) the Age of the Pixel, the Point, and the Point Cloud has finally come.  Something must emerge to replace what has reached its limitations, or else the hardware of the future would come to cost more than your house, making it impractical for the average user: the very person for whom gaming has been designed all along.

      What it boils down to … is a revolution, where points replace polygons because they are infinitely smaller and require almost no memory or processing power to push to displays ultra fast.  Only the pixels needed at render time are grabbed via Bruce Dell's insanely smart indexing system (a combination of Google's search algorithm and Mandelbrot's infinitely zooming imagery) and then textured appropriately.  Done right, you could use your mother's worn-down old laptop and still play real-time graphics that look like the image below.


Euclideon’s Tech:  zoom in all the way, the image holds at full fidelity; zoom out and see the whole forest at once, something that would normally take 64 GB loaded into memory!

      There are no images in this image's background (above); the background is actual trees, rocks, bark, grass, and more trees … for as far as the eye can see.  It is all naturally lit by the sun but now in shadow due to the density of the foliage in this setting.  Actual overhead foliage blocks the light as it would realistically.  Everything you see is a real object (scanned in) and placed on the ground in the background.  You can walk up close or even lean in as far as you want toward any object you see.  If this scene were being rendered in VR, for example, you could get down on your hands and knees (using a Vive, maybe) and put your eyes right up to the stonework of those stairs and see all the grain of real stone in its infinite complexity.  There would never be any stretch marks or blocks or bits in the imagery you see, just more detail as far down as you go.  “It's turtles … all the way down, Quentin.” (wink).


         There are trillions of dollars invested in the polygon, the current system for creating visuals, but its time is nearly over.  The returns are approaching the limit of what can be achieved (let alone within the average consumer's budget), given how much the new cards are going to cost.  Moore's Law no longer helps the polygon, where all the obvious advancements are already built in.  And so the time for a new method is here.  And right on cue, the method needed to replace the polygon is already here too.

      Some of you might say “poppycock” upon hearing this, but let's look at the facts, shall we?  In order to really push the boundaries of what the polygon can do, nVidia had to spend almost $1.5 billion in research to create its latest graphics card.  $1.5 billion!  Will they even sell enough cards to justify that amount of research?  Maybe.  Maybe they will.  But they won't do it again, not in the next 10-15 years, when MUCH GREATER visual fidelity is already needed.  All of the ideas for the polygon are in use, and short of building a rack of GPUs in your garage to throw more processing at the problem, nothing more can be done.  Only time will tell, but there is no more time.  Greater fidelity is needed now, in smaller and smaller form factors, and Euclideon's method would work well with those.

      It is my belief that nVidia is throwing money at the problem in the hope of keeping the industry itself alive for 5-10 more years at best, because the industry is worth billions per year, and all those artists need to keep their jobs because their art rocks.  They create the worlds we play in; let's give them the props they deserve!  But yeah, that is a staggering sum needed in polygon research just to get marginal advancements (as nVidia tries desperately to wrench the last few drops out of the polygon's potential).  I think that tells you a lot right there.


      But how do you tell millions of experienced graphics designers that all of their tools, their knowledge, that investment … all gets chucked out the window, if not now, then soon?  Well, you don't … you just build the next system around them, hidden out of sight.  And announce it when it's finally ready, in 2016 in Australia, as Bruce Dell has just done.

      In partnership with the Australian government, Bruce has opened what he calls the “Holoverse” (image above), first in Australia and then around the world.  The media is referring to them as hologram centers.  These centers use his proprietary rendering technology, which no longer relies on polygons, but on points.  Or, more accurately, pixels, since the pixel is the measure by which fidelity is judged in his particular real-time-graphics system.  (Video is different: video quality is based on a video's resolution and its data-encoding rate.)


This image cannot convey the true fidelity this individual is seeing, which is much higher than what the current polygon system can achieve.  This is built on points, not polygons, and the resolution could be 16K.

      For example, to do even decent 360 videos, 8K fidelity is going to be required.  And 16K will be required for stereoscopic 360 videos of decent quality.  Even the up-and-coming standard for 2016, 4K, will still be too weak for standard 360 videos.  And stereoscopic videos could still turn out to be pretty blurry messes without 16K, even when encoded at the highest possible data rates.
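A quick back-of-envelope comparison makes the resolution argument concrete.  This is my own arithmetic, assuming the common 16:9 dimensions for each named resolution (the article itself doesn't specify exact dimensions, and "16K" has no single standard):

```python
# Rough pixel counts for common video resolutions (16:9 assumed;
# actual "16K" dimensions are not standardized).
resolutions = {
    "4K":  (3840, 2160),
    "8K":  (7680, 4320),
    "16K": (15360, 8640),
}

for name, (w, h) in resolutions.items():
    mono = w * h
    stereo = mono * 2          # one full view per eye
    print(f"{name}: {mono:,} px mono, {stereo:,} px stereo")

# 16K carries 16x the pixels of 4K, which is why 360 video -- where the
# full sphere is spread across those pixels -- looks so soft per eye
# at lower resolutions.
print("16K / 4K pixel ratio:", (15360 * 8640) // (3840 * 2160))
```

Because a 360 video spreads its pixels across an entire sphere, only a small slice of that total ever sits in front of each eye, which is why per-eye sharpness demands such large total resolutions.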

      The point system Bruce developed is ultra fast, and it runs in real time, so the only limit on fidelity is the number of pixels you choose to render your game in.  Samsung proposes to have a 12K phone screen ready by 2019, and this system by Bruce Dell may be the only one capable of actually running anything at full speed at that resolution.  The reason such fidelity is possible for Dell's system (at the high speeds needed for VR) is that he has created an “indexing system” for all the point-cloud data that works just like Google does when you run a word search, only this one works on points.


2016: Hologram Centers (with 40 booths per Center) are opening in Australia, then coming worldwide.  They use holographic visuals with higher-fidelity imagery than any current VR system in the world, including the Oculus Rift or Vive.

       The Australian government has helped Bruce to open several Hologram Centers (pictured above).  Is it really a hologram, though?  I disagree with the media's use of the term “hologram”: holograms are actual objects made of light, created in the real world, that need no glasses to be seen because they actually exist, while augmented reality provides similar imagery that DOES require glasses to see.  The Holoverse uses glasses to create stereoscopy from the light reflected off each booth's walls.  So I feel this is more akin to Augmented Reality, but let's not squabble over semantics.  The main point is that the glasses have no resolution of their own, so whatever resolution the Centers choose to display in, they can handle it.  The Centers have the ability to render in 8K, or EVEN 16K, if a projector capable of that resolution were available now.  Here is an image from some simple games created for the Centers:


Because these fish models are made up of living (animating) point-cloud groupings, they are not loaded into memory the way polygonal instances are.  You can have an infinite number of them, because the screen resolution is the only limiting factor here.


      The two systems now in play are the polygon system and the new Euclideon point-cloud system.  The polygon system is what AR and VR use going forward, trying its best to render realistic graphics on smaller and smaller devices, to achieve Presence, and to squeeze even more out of polygons until it literally reaches the end of its rope.  The other system is the point-cloud system, using points, or pixels.  Only the point needed for each display pixel is ray-traced, based on its distance from the camera and the screen resolution.  All the other points stay hidden away on the hard drive inside the complex data-indexing system that drives Euclideon's technology.
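The key claim in that paragraph, that work is done only for the pixels on screen, can be sketched in a few lines of Python.  This is my own illustrative reconstruction, not Euclideon's actual code; the `DummyIndex` class and its `first_hit` query are invented stand-ins for whatever spatial index the real engine uses:

```python
# Illustrative sketch only (not Euclideon's code): per-frame work scales
# with the number of SCREEN pixels, not with the number of points in the
# scene.  Each pixel asks a spatial index for the front-most point along
# its view ray; everything behind that point is never touched.

class DummyIndex:
    """Hypothetical stand-in for the real spatial index: here the
    front-most point per pixel is simply precomputed in a dict."""
    def __init__(self, front_points):
        self.front_points = front_points   # {(x, y): color}

    def first_hit(self, x, y):
        return self.front_points.get((x, y))

def render_frame(width, height, index):
    """One index lookup per pixel -- width * height lookups total,
    whether the scene holds a thousand points or a trillion."""
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            hit = index.first_hit(x, y)
            row.append(hit if hit is not None else (0, 0, 0))  # background
        frame.append(row)
    return frame

frame = render_frame(2, 2, DummyIndex({(0, 0): (255, 0, 0)}))
print(frame[0][0])  # the one scanned point that faces this pixel
```

The design point is the cost model: a polygon rasterizer must at least consider every triangle submitted to it, while a scheme like this pays per pixel, provided the index can answer each front-most-point query cheaply.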

      What is the difference between Augmented Reality and Virtual Reality?  Virtual Reality (VR) is about bringing your mind out of the real world into a simulation of whatever kind of world the developer desires, from fantastical RPG lands to cybernetic warfare zones in space; the sky (or rather the Universe) is now the limit.  VR is where new and exciting things can happen that can't happen in the real world: you can become a wizard with a magic ring that shoots electricity, who lives in a giant toad … if that's the direction a developer wants to go.  The CPU and GPU of the HMD in question are directly responsible for everything you see and hear inside this simulated reality, so much power is needed.  Augmented Reality (AR), by contrast, is about augmenting actual reality with simulated stereoscopic objects that appear to be out there in reality with you.  The CPU and GPU here are only needed for the items overlaying your reality, such as a fancy HUD with information about, say, some zoo animals you are seeing, so AR's processing workload is considerably less than that of VR systems, which must do it all.

      With AR systems, you could put on your AR glasses and receive a guided tour as you walk through your local zoo, with glowing-dotted lines created along the actual path showing you where to walk to find the animals you want to see the most.  And when you do find them, informational displays could sprout up around them, overlaying reality, as a soothing voice in your ear tells you all about the animals in question.  And these AR objects are smart enough not to appear over people, so as people walk through your scene, the overlay which appears in the distance is stenciled out behind the people to preserve reality as much as possible and give some authenticity to the created objects actually being in reality with you.


Every item here was scanned from an original object or piece of art, and you can see it on any laptop in real time using Euclideon's Engine.

      All of these systems, however, rely on industry standards.  They rely on things like Android and Windows for operating systems, and things like Unity for their APIs and for the tools to develop games.  And all of those things rely on the industry standard of polygons, which form the underlying meshes from which everything in VR and AR games and applications is ultimately created.  The polygon has been at the heart of the games industry since its inception, from the first 3D game ever made until now.  And in order to produce stunningly realistic, organic-looking worlds, game companies have had to create more and more of these polygons at strikingly smaller dimensions than ever before, taking up massive amounts of memory to hold them all and requiring incredible GPUs capable of pushing more and more of them per second.  And that has led us to a place where future progress is almost impossible … with the polygon.

      One of the premiere graphics-rendering companies, nVidia, just released its most powerful GPU to date: the GeForce GTX 1080.  And while it is truly amazing, it is still a long way from rendering truly realistic visuals on par with actual reality.  And even when it comes close, that's usually only from a certain distance away.  Whenever you get too close to the walls or floors of any environment, the graphical fidelity breaks down under such scrutiny.  You still cannot zoom into polygon environments and keep a sense of life-like fidelity.  And with VR and motion tracking, getting down on the floor and leaning in for a closer look is something many people enjoy doing.  Wanting to get a sense of the world around us is a natural part of our existence, but with polygons, and with texture limits being a function of how many assets are loaded in the scene, there are limits to what can be done.  Sadly, we are still a long way from breaking through that glass ceiling … at least with the modern graphics industry's approach.

      But this is now the Age of Aquarius (wink), and whether or not what those New-Age thinkers believe has anything to do with it, big changes were promised in the New Age: changes concerning expanded consciousness and the growth of the human mind.  And what better expansion of consciousness could there be than the ideas, insights, and mental experiences that drive VR and AR?  There is so much to learn, so much to achieve, that the next 20 years will literally be a non-stop race between rival companies like Google, Facebook, Apple, and Microsoft.  Every achievement puts one of them ahead of its rivals in terms of wealth and power, while the outcome of such discoveries also benefits the end user's overall experience in VR and AR.  It is win-win.  With these forces, AR and VR are going to bring the “fiction” to science, here in the real world.  Science fiction is a reality we live in right now, thanks to VR.


Bruce Dell, Genius Behind Euclideon’s Infinite-Detail Engine


      It was in 2012, the year Aquarius officially began, that the world was introduced to Bruce Dell, a lone programming genius who had developed a new way to do graphics.  While point-cloud systems already existed, they were cumbersome, requiring $20-million servers full of computer racks to work at all.  Points had been extensively thought about, but never tackled in such an imaginative way.  But the world judged him mad, based on his claims that the geometry problem had been defeated and was now DOA thanks to his ability to create “unlimited geometry”.  People were up in arms, and skeptics cropped up from all parts of the internet to declare the man's work a hoax.  After this, he stopped talking about his invention and went into seclusion, but wild forces were at play and would not let him go dormant.

      Instead, money came in from all corners of Australia.  Bruce had to turn down many of the offers, because some would have required selling out to foreign investors and moving Euclideon out of Australia.  Bruce decided to stay true to his homeland, and the Australian government later rewarded him with money and opportunities to work with the best business minds in the country.  They focused their Engine toward existing point-cloud companies with high-end tycoons for clients.  And out from under the scrutiny and skepticism, he built a small but talented team, which flourished.

      After the intense early ridicule, Euclideon was finally green-lit for the creation of the Hologram Centers, which would have graphics far in excess of what any current VR system can generate, or will be able to for years to come.  Bruce's work can create breathtaking scenes of pure organic terrain that can be studied up close or from any distance and admired as if you were taking a hike through an actual forest.  Yes, it takes enormous amounts of data stored on the hard drive to do this, but you can load into your game nearly instantly.  There is practically no loading time: you go from click to gameplay in milliseconds.



      Bruce Dell's idea was that since reality is made up of points (called atoms) that rays of light bounce off of to produce what we see, video-game graphics could be made of the same thing instead of polygons.  But it was the answer to his question that may have produced the best results here.  The question itself was the true seed of genius, as all good questions are.

     (An approximated guess as to what Bruce Dell may have been thinking, NOT a quote):

  “I want to render points.  I want each point to have its own texturing (its own attributes), and I want to be able to render them ULTRA FAST.  How could I render all of these points fast enough, given that there would be so many of them making up real-world objects? … (Which then leads to the thought:)  What if I didn't have to render all those points?  What if I only had to get one point for each pixel on the screen?  What Engine could do all of these things?”  And this is the Engine Bruce Dell's imagination cooked up in reply.

  • Point-Cloud Scans can do the actual modeling and texturing in unlimited detail, given that we can scan at whatever density we want.  For example, if we scan rocks at 64 atoms per cubic millimeter, then it would take a 16K screen zoomed all the way in to see between the points.  That means at 4K or 8K, it would work flawlessly.

  • Google's Search Algorithm nests words within words, contextualizing data somewhat like a complex file-folder system.  If a word is related to other words by any number of factors, it can sit inside the word you are searching for, yet be a long way off and not show up in the results for 100 pages.

  • Mandelbrot Set images can fill your screen at your chosen resolution and show detail for every pixel on the screen.  But when you zoom in, you don't lose detail; you just see the detail hidden inside the detail you were already seeing.  The mathematical model allows for more and more detail … all the way down.
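The Mandelbrot point is easy to demonstrate in code: detail is computed on demand at any zoom level, never stored.  A minimal sketch (the sample-window coordinates are just illustrative picks near the set's boundary):

```python
# Detail on demand: sample the Mandelbrot set at two wildly different
# zoom levels and count distinct escape times.  Variety shows up at both
# scales because detail is computed, never stored.

def escape_time(c, max_iter=64):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def sample(center, span, grid=32):
    seen = set()
    for i in range(grid):
        for j in range(grid):
            c = complex(center.real - span / 2 + span * i / grid,
                        center.imag - span / 2 + span * j / grid)
            seen.add(escape_time(c))
    return seen

wide = sample(complex(-0.5, 0.0), span=3.0)     # the whole set
deep = sample(complex(-0.75, 0.1), span=0.01)   # zoomed in near the boundary
print(len(wide), len(deep))  # many distinct detail levels at both scales
```

The same formula answers the question at every scale; zooming changes only which window you ask about, which is the property the article is drawing an analogy to.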


A Courtyard with life-like trees, weathered walls, life-like cobblestones, each individually placed.  Displacement mapping isn’t required, as the rocks and stones are truly modeled and therefore are actually deformed.

      It is my theory that Bruce Dell combined these elements to formulate his Unlimited-Detail Engine.  It is not actually unlimited; it is limited only by the number of pixels you choose to render your game in.  If you had an 8K screen, his Engine could render at that resolution and provide even more detail, but at a slightly slower rendering speed … as you are greatly increasing the number of pixels you have to search for per frame.


      Everywhere I go on the internet in my research, I see nothing but hate for point-cloud data theory.  Nobody seems to understand what Bruce Dell has done here: it's not the old point-cloud system, it's an all-new system.  Once he explained his idea of how Google obtains search results so fast, using an indexing system, it started to dawn on me.  And I think I can actually explain this to a lay person.  If you consider yourself a lay person, and you actually managed to read this far into a lengthy article (for which I thank and praise you), then let me know if this explanation actually makes sense to you.  Here goes.

      The main consensus out there is that if anyone tried to render all those points, they’d bog down any good GPU/CPU in the details and turn their graphics card into a postcard generator, at the horrendous rate of something like one postcard per minute.  Haha!  That’s too funny.  (That means 1 Frame Per Minute, just in case you didn’t catch me, haha.)

      But what he did was invent an indexing system for point-cloud data, so that he could install all the data on one's hard drive, pre-indexed and saved in a kind of sorting system.  The indexing system assigns codes to groups of data and then, like a file-folder system, puts further layers of detail inside top-level folders based (I believe) on the distance of a given object from the camera and the screen resolution.

       NOTE:  For the purposes of this portion of the explanation, I will refer to one demonstration pixel as “File A”.  It represents one pixel on a screen of a certain resolution, hidden inside of which (like a file folder) are more numbers for File A.  File A could be seen as File A0 (or one point on the screen).

      So the camera sees only one pixel from File A at a given distance because it’s aimed toward the camera and nothing else is closer or in the way.  Now walk closer to that pixel and File A opens up to show Files A1-A5.  Go closer still and Files A1-A5 become Files A100-A500 (thus taking up more and more of the screen, as getting closer to them would).  You only ever see the points in those filing folders as you get closer to the “File A” that was originally only one pixel on your screen.  Any objects behind File A never need to be seen since File A is in front of those files (pixels).  They are not searched for or loaded, and no textures ever have to be reserved for them just in case because only the pixels needed are ever solved for.  Anything behind File A does not exist to the search algorithm.  Now just replicate this same routine for every other pixel on the screen. 

      For example, in the image two paragraphs below, that large tree in the middle-top might be considered a grouping something like Files A1000-A9999 at the current proximity and screen resolution.  But move back thirty feet, and it might be more like Files A100-A499.  The tree you are seeing is thus the index grouping at that proximity and screen resolution.  For the tree to fill the screen, it might look something like Files A1,000,000-A3,999,999.  And the entire tree, seen from as far away as it could still exist on your screen as a single pixel … would be File A0.  The ray-tracer only sends a beam to see what it intersects first: File A, File B, or File Z, whichever is closest to the camera.  It treats the entire scene like one big topographical map that starts with the closest objects and renders back in the Z plane only when necessary to find something to fill a pixel with.  Does it make sense now?

      With point-cloud data, you can scan at whatever rate you desire for the ultimate fidelity possible.  For example, in the following image (rendered live in Euclideon’s Unlimited-Detail Engine at 120 FPS), everything was scanned at 64 atoms per cubic millimeter:
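That density figure is worth unpacking with some quick arithmetic (assuming the scan forms a uniform grid, which the article doesn't actually state):

```python
# "64 atoms per cubic millimetre", unpacked.  Assuming a uniform grid
# (an assumption; the article doesn't say), 64 points per mm^3 means
# cbrt(64) = 4 points along each axis per millimetre.

points_per_mm3 = 64
points_per_mm = round(points_per_mm3 ** (1 / 3))   # 4 per linear mm
spacing_mm = 1 / points_per_mm                     # 0.25 mm between points

print(points_per_mm, "points per linear mm; one point every", spacing_mm, "mm")

# Gaps between neighbouring points only become visible once a single
# screen pixel covers less than 0.25 mm of surface -- extreme close-up.
```

In other words, a point every quarter millimetre: finer than most people can resolve without leaning right in, which is why the article claims only a fully zoomed 16K screen could "see between the points."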



      Bruce Dell's technology is truly revolutionary, but keep in mind it's still in its infancy, several decades behind the polygon industry in accumulated human ambition and creativity.  There are literally millions of people in the polygon-creation industries around the world, with many lifetimes of experience, skills, and know-how, plus thousands of tools and powerful game systems like Unity, all of which are not ready to throw in the towel just yet.  The future, VR included, certainly does require that shift in its fundamental approach to game and world-building design, and points are as good an idea for how to do that as is yet available to us.  Schopenhauer knew this best: nothing in Nature works like Nature … but something built upon Nature's principles.  There's really nothing more fundamental to the makeup of our world than the atom, the point, the particle.  These are the smallest things still able to interact with light.  From small seeds come big ideas.

      Because of how entrenched and how awesomely capable the current polygon industry is, we can safely say that for the next 5-10 years, that old system will hold sway and continue to dominate the market.  But once Euclideon has the man-power and resources it needs to amp up what the Euclideon Engine can do to compete against the engorged polygon industry, we may begin to see the Revolution I have been speaking of here occurring before our very eyes.  And what will that look like?  Imagine this…

      You put on your VR glasses (in the future many experts feel that HMD’s will shrink down to become glasses, and even further out, contact lenses, before ever nearing the jacking-in experienced in the Matrix) and load up your VR Racing Game.  The entire five-mile track is rendered in excruciating point-cloud detail, tweaked by artists with every embellishment needed to sell the illusion.  You hear the race clock counting down and you rev your engine, and many other real-life players imitate you and rev theirs even louder.  The green light flares … and you are off in a squeal of tire smoke as bits of your car’s tires actually get chewed off and burnt up to provide the smoke!  This point-cloud world, now embedded with physics simulations, can do almost anything now, as it would occur in Nature.  You just build it and everything is worked out.


      Here in this image you can see a top-down view of the track you are racing on.  It's not a hand-drawn map or a crude CGI rendering; it's a real camera view looking down over the entire length of the track where you are currently racing.  The Infinite-Detail Engine can jump to a bird's-eye view of 100 GB of terrain data instantly, unlike traditional polygon systems.  Every detail is still there; there is never any loading or pop-in of shrubs or trees.  Everything is always there, just like reality.  There is no Level-of-Detail swapping.  All the detail your resolution can provide is always there.  The rendering engine is running at 120 FPS in stereoscopic, for both eyes.  It's absolutely incredible.

      To return to your car, you can either click a button to jump back instantly … or you can click the special “zoom-me-down” button that modern games would never dare to include.  You click that one and see yourself plunge right down out of the sky toward your car seat, plunking down with a rush of adrenaline, because the Infinite-Detail Engine just zoomed you down from sky to ground level at full speed without any hiccups, pops, or even judder.  At this level of detail and FPS, motion sickness is also greatly reduced.

      Now back in your car, you round a corner and bank hard on the turns, watching nervously as bits of the road (also made up of points) crack and fall apart because those points can be pulverized, chipped off, destroyed piece by piece in new ways current systems can’t pull off. 


      Look!  There's a straightaway; you can now go 120 miles per hour for a few seconds, and it actually feels like it, because the Engine has the throughput capacity to handle that much terrain data, no matter what you throw at it, without any visual hiccups, glitches, or hang-ups.

      You brake suddenly.  The road on this part of the track (image below) is still under construction; the cliff-side dirt and grass could break away, slide down the bank, and overwhelm your car if you aren't careful.  Demolished white stones have been piled up along the roadside (top-left corner of the image above), and several trucks and cars line the other side, rendered in full fidelity, absolutely indistinguishable from actual vehicles.


      And just look at all of that wear and tear and black sooty road, just like in real life!  This has always been the hardest illusion to pull off for modern artists, the look not of the perfect, but of the imperfect, just like in real life.  Weathering a road this complex and this long (5 miles) would take a team of artists months or a year to pull off, but with point-cloud scanning, these road visuals were achieved in hours and look way more realistic than any artist could ever obtain under the pressures of game development schedules. 

      What’s more, the game (and this scene) loads in milliseconds after you click start.  Normally, this much terrain data would be slowly loaded into memory for up to four minutes, but Euclideon searches the hard drive only for the number of pixels needed to fill the screen, using their exclusive indexing system.   They gather just the pixels that make up what you see on the screen, and they can do this 120 times per second even operating under 8K or 16K resolutions!  
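Taking the article's own numbers at face value, the implied lookup rate is easy to compute.  This is back-of-envelope arithmetic only, assuming 16:9 dimensions for 8K; real workloads would differ:

```python
# Back-of-envelope only: taking the article's "8K at 120 FPS" claim at
# face value, how many per-pixel index lookups per second would that be?

width, height, fps = 7680, 4320, 120        # 8K, 16:9 assumed
lookups_per_sec = width * height * fps
print(f"{lookups_per_sec:,} lookups per second")   # roughly 4 billion

# Stereoscopic rendering (one view per eye) would double this again,
# which shows why each index query has to be answered so cheaply.
```

Roughly four billion queries per second is the scale at which any per-pixel search scheme would have to operate, which puts the emphasis squarely on how fast the index can answer each one.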

      You marvel at this virtual reality before you, unable to tell it from actual reality, and at this level of detail your fear hormones, adrenaline, and emotions are all amped up to insane levels that make playing this game feel just as risky as doing it for real.  This is the greatest game you've ever played, because when you are in this kind of point-cloud-based VR, you truly feel like you're there, more than ever before….


      This is the achievement of Bruce Dell, and a gift to the world that (except in the case of the Holoverse centers) we just cannot quite take hold of … at least not yet. 

      That’s because we all have just … a BIT more business to settle with the polygon industry first.

Thank you for your time and for reading this very long article.  Unlike those “TL;DR” people who can't read anything longer than a Twitter post any more, you are clearly the best.
