Oh. I thought it was a hybrid camera-IR system. Didn't they say something about room-based positional tracking as well as the LIDAR?
They said that you can just use more beacons to connect areas together, implying that the cameras somehow just figure it out. Maybe you have to calibrate them, manually inputting their world positions relative to each other.
Honestly, Lighthouse is way over my head. I'm pretty sure it's just witchcraft.
In the Tested interview they pretty explicitly said the Lighthouse sensors are not proper cameras. I had assumed that meant they're just on/off sensors, with the laser pulses being modulated over time to create identifying patterns. They also said that it's much simpler mathematically, which to me says they aren't any sort of camera/sensor hybrid thing; once you get a camera in there you pretty much require the sort of image processing techniques that Oculus/Sony are using now. I imagine the wall detection thing will be done by detecting reflections of the laser pulse off of walls.
That said, they've also said that the position of the sensors needs to be insanely precise, I don't remember numbers but that sort of thing probably ups the price a fair amount.
Lighthouse is literally just a laser speed-trap gun, except instead of one laser pulse to measure an object's speed it uses an array to fill a room and track an object (which is why you need two of them). For accuracy, and to be able to track the specific thing you want with this array, the headset/controller basically just have reflectors on them (which would show up really noticeably bright to the array).
It's basically just a hyper-low-latency, accurate version of Kinect.
[QUOTE=KorJax;47361979]Lighthouse is literally just a laser speed-trap gun, except instead of one laser pulse to measure an object's speed it uses an array to fill a room and track an object (which is why you need two of them). For accuracy, and to be able to track the specific thing you want with this array, the headset/controller basically just have reflectors on them (which would show up really noticeably bright to the array).[/QUOTE]
They're not reflectors. They're simple light sensors. The object being tracked 'knows' where each of these sensors is, and can determine its position based on when each sensor lights up. Should work with even just one beacon, but it's more accurate and robust with two.
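To put rough numbers on the timing idea: if the base station sweeps its laser at a known rotation rate, the delay between a sync flash and the moment a sensor fires tells you the sweep angle at that sensor. A toy sketch of that timing-to-angle conversion (the 60 Hz sweep rate and the sync-flash scheme are my assumptions, not confirmed specs):

```python
import math

SWEEP_HZ = 60.0  # assumed rotor speed; the real base station may differ

def pulse_to_angle(sync_time, pulse_time, sweep_hz=SWEEP_HZ):
    """Convert the delay between the sync flash and the laser pulse
    hitting a sensor into the sweep angle (radians) at that sensor."""
    period = 1.0 / sweep_hz                    # time for one full rotation
    fraction = (pulse_time - sync_time) / period
    return fraction * 2.0 * math.pi

# A sensor hit a quarter-rotation after the sync flash sits at 90 degrees.
angle = pulse_to_angle(0.0, (1.0 / SWEEP_HZ) / 4.0)  # -> pi/2
```

With angles like this from enough sensors at known spots on the headset, software can solve for where the headset is.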
Cliff Bleszinski has a pretty nice perspective on how he thinks VR is going to fall in a small-medium-large format much like every other medium: [url]http://dudehugespeaks.tumblr.com/post/114162826447/vr-small-medium-large[/url]
[QUOTE=Clavus;47364562]Cliff Bleszinski has a pretty nice perspective on how he thinks VR is going to fall in a small-medium-large format much like every other medium: [url]http://dudehugespeaks.tumblr.com/post/114162826447/vr-small-medium-large[/url][/QUOTE]
I think we're seeing this already, it's just a logical conclusion. Not everyone's going to want or be able to afford the highest-end stuff, so an average consumer tier will need to be produced. And then you'll want one for the kids, so that necessitates a lower-tier product that's less expensive and less powerful.
We already have a semi-tiered system. GearVR is clearly the casual, entry-level VR market, Oculus sits somewhere in the middle, and HTC is going all-out. I expect that trend to continue.
Palmer shoots down the idea of a 4K Rift.
[quote=some guy]4K Samsung screen could be the killer feature they are waiting for to be integrated. In the mean time they finetune all non-screen related stuff.[/quote]
[quote=some other guy]I know I'll get downvoted for saying it again but you guys need to give up on the '4k for CV1' chant. Resolution isn't the solution you hope it is. You need a top end PC to even hope getting 90fps, the screens will be stupidly expensive ($100+ each) and 90% of people will just double the pixels so they can actually play a game, losing any advantage. As long as there is no screen door you don't need 4k...
4k will come (along with the tech to drive it), just be patient. The killer feature (right now) is, obviously, hand tracking... my fingers (and eyes) can wait.[/quote]
[quote=Palmer]Expense is not the issue. The hardware simply does not exist.[/quote]
Also:
[quote=Palmer's answer to "Could Oculus be implementing AR mode to the CV1"]Nope.
AR is hard, harder than most other hard things. Camera technology is not nearly advanced enough to do it properly on an immersive display, which is why you see most dedicated AR hardware going for an optically transparent approach. Things along the lines of the GearVR camera passthrough can be useful tools, but not pleasant to use for any length of time.
Even basic passthrough is not as simple as putting two cameras on the headset - not only is the IPD wrong, the translation is completely off. Stereoscopic video cannot accurately depict a "real" 3D scene, it is just a neat trick that can provide a similar effect under very specific constraints.[/quote]
[QUOTE=Orkel;47367068]Palmer shoots down the idea of a 4K Rift.[/QUOTE]
Took a dive in his recent post history, has some more interesting stuff:
[quote]There is going to be software that is exclusive to the Rift, some of our first party content especially. We have been spending time and money on software for our system for years now, it is not "best for VR" for us to spend those finite resources compromising around lowest common denominator feature sets in an attempt to support all headsets.
Other companies will do the same, creating and funding content that is designed around the strengths of their particular system. Most software developers will end up supporting all available headsets to some degree, but you can bet on VR hardware companies (headset, input, capture, and otherwise) funding development of things that show off the cutting edge - expect that to accelerate as things like eye tracking, body tracking, emotional state sensing, and other technologies start to become part of VR hardware, and accelerate further as competition drives people in different directions. It is hard for any dev (especially bigger, slower moving devs) to spend their own resources on new technologies before they are proven out, and that is true even for the relatively limited VR tech that exists today.
P.S. The Rift is not closed.[/quote]
[quote]> They need to be a walled garden (in the same way that consoles are) to make money.
We don't.
> They have to make their money by charging developers for licenses to write software for the rift.
We won't. That would cripple VR development, especially for indies. Anyone who tries is going to lose a lot of devs to other platforms overnight.[/quote]
I thought 4K screens did exist in a phone format, though?
Granted, they probably aren't the right kind of screen for the Rift (low latency / OLED / etc.).
[QUOTE=Orkel;47359058]UE4 got support for the Leap passthrough AR.
[img]http://i.imgur.com/ozbhr3E.gif[/img][/QUOTE]
oh my god, finally leap motion is getting a lot more support now.
I spent $100 on it, but it's been sitting in my electronics chest ever since.
Lighthouse lasers at 1/8th speed: [url]https://twitter.com/joeludwig/status/579516434163724288[/url]
[editline]22nd March 2015[/editline]
This animation should give an idea of how the two lasers from one base station scan around the room: [url]http://i.imgur.com/0SacEa0.gifv[/url]
So it's basically a supermarket scanner repurposed for positional mapping?
[QUOTE=woolio1;47374929]So it's basically a supermarket scanner repurposed for positional mapping?[/QUOTE]
Science is cool.
From what I can understand, Lighthouse works on a similar concept to old touch screens from the mid 80's and early 90's. Those worked by having an array of IR lights along the edges of the entire screen, and when a portion of the beams were broken at an intersect, it would register a touch on the screen. It's just, instead of an array of lights there's what, one light that spins? That's kinda like how radar works. When the IR beam is broken within the space it's given, it goes "Hey I need to track this" and the next time the beam hits it, software just calculates the difference in translation. Radar works similarly, only it doesn't use infrared, it uses radio. When the radio waves bounce off of an object, the radar receives the bounceback and from that it's capable of determining the position and distance of the object.
But that explanation is for only one plane of movement. I don't know a lot about Lighthouse but I'd imagine you need at least three or four of them to create proper intersects, higher accuracy, and reduced latency.
EDIT
Just spent some time looking into it and it looks like my assumptions are correct. It's not black magic, it's just lights.
That's not really it. As the beam sweeps over the headset, each of the sensors on the headset registers a pulse. Knowing the rate of the beam sweep, the relative positions of the sensors, and the exact time of each recorded pulse, you can solve for the position and orientation of the headset.
For unambiguous tracking, you just need one beam to be able to see a certain number of sensors, probably about 5 or 6. Having more Lighthouses just helps to prevent loss of tracking due to occlusion.
The concepts involved aren't all spectacularly novel; for instance GPS works by transmitting precise timing information from >= 4 satellites of known position, which your GPS receiver can then use to solve for your position on the Earth and the current time, so methods will exist in the literature for doing those sorts of calculations. What's impressive isn't that any of this is black magic, but that they have come up with a workable method, and done the engineering, to make realtime sub-millimeter tracking in a room-scale volume practical.
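As a toy illustration of the kind of solve involved, here's a 2D version: two base stations each report only a bearing (the sweep angle at which the laser hit a sensor), and intersecting the two rays recovers the sensor's position. The real system solves a full 6-DOF pose from many sensors on a rigid body, so treat this as a sketch of the geometry, not Valve's actual algorithm:

```python
import math

def triangulate(base_a, angle_a, base_b, angle_b):
    """Intersect two bearing rays in 2D: each base station at a known
    position reports only the angle at which its sweep hit the sensor."""
    ax, ay = base_a
    bx, by = base_b
    dax, day = math.cos(angle_a), math.sin(angle_a)  # direction of ray A
    dbx, dby = math.cos(angle_b), math.sin(angle_b)  # direction of ray B
    # Solve base_a + t*dA = base_b + s*dB via the 2D cross product.
    denom = dax * dby - day * dbx  # zero if the rays are parallel
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

# Stations at (0,0) and (2,0); bearings of 45 and 135 degrees meet at (1,1).
x, y = triangulate((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
```

The same cross-product trick generalizes: with more sensors and known sensor geometry you get an overdetermined system, which is what lets a single station resolve full position and orientation.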
Some days I wake up wondering when we'll be in 'the future'.
And then I realize we already are. I'm typing this on a Surface Pro 3 which is, for all intents and purposes, a magical stone tablet which pulls information out of the ether (via a nearby wireless router) where most of human existence is recorded in real time.
Then, when I get out of bed and go to work (via an electronic application which provides a virtual office for this company, which has no physical location in the real world) I'll be using another magical stone tablet which can translate pressure and hand movements into beautiful art pieces. Next year, I might be wearing a head mounted display and painting in the middle of a crackling, mountainous thunderscape for hours at a time using a 3D brush. [Seriously, I want a VR ZBrush app]
When this technology all shrinks down and becomes wireless I've come to the conclusion that we'll literally be wizards to any outside observer, drawing strange sigils in the air with our fingers to command the ether and seeing what can't be seen by the naked eye while speaking to invisible presences half a world away. "Any sufficiently advanced technology is indistinguishable from magic" indeed.
Gods, am I excited for the next 5 years in technological development.
[QUOTE=Firgof Umbra;47379915]Some days I wake up wondering when we'll be in 'the future'.
And then I realize we already are. I'm typing this on a Surface Pro 3 which is, for all intents and purposes, a magical stone tablet which pulls information out of the ether (via a nearby wireless router) where most of human existence is recorded in real time.
Then, when I get out of bed and go to work (via an electronic application which provides a virtual office for this company, which has no physical location in the real world) I'll be using another magical stone tablet which can translate pressure and hand movements into beautiful art pieces. Next year, I might be wearing a head mounted display and painting in the middle of a crackling, mountainous thunderscape for hours at a time using a 3D brush. [Seriously, I want a VR ZBrush app]
When this technology all shrinks down and becomes wireless I've come to the conclusion that we'll literally be wizards to any outside observer, drawing strange sigils in the air with our fingers to command the ether and seeing what can't be seen by the naked eye while speaking to invisible presences half a world away. "Any sufficiently advanced technology is indistinguishable from magic" indeed.
Gods, am I excited for the next 5 years in technological development.[/QUOTE]
And that's just the beginning. Imagine where we'll be in ten, fifteen, twenty years? Imagine where we'll be when we retire?
The world's going to get real interesting real quick.
[QUOTE=Clavus;47373136]Lighthouse lasers at 1/8th speed: [url]https://twitter.com/joeludwig/status/579516434163724288[/url]
[editline]22nd March 2015[/editline]
This animation should give an idea of how the two lasers from one base station scan around the room: [url]http://i.imgur.com/0SacEa0.gifv[/url][/QUOTE]
I just realised this is a 3d scanner.
Saving real objects to model files would just need the right software.
no it isn't unless you want to cover your model in 1000 sensors
He's not entirely wrong...
He's sort of wrong, but the concept is clean. You've got a rapidly-rotating supermarket scanner, which sweeps horizontal lines of laser light across the room really fast. In principle, that's a similar setup to an actual 3D scanner. Provided you had a high-speed, high-resolution camera capable of seeing the beams as they reflect off the surfaces of whatever you're trying to scan, and provided you had the software capable of turning that data into vertices in virtual space, Lighthouse operates in principle like a 3D scanner.
Then again, that only works if what you're trying to scan is a spherical cow in a vacuum.
[QUOTE=woolio1;47384119]He's not entirely wrong...
He's sort of wrong, but the concept is clean. You've got a rapidly-rotating supermarket scanner, which sweeps horizontal lines of laser light across the room really fast. In principle, that's a similar setup to an actual 3D scanner. Provided you had a high-speed, high-resolution camera capable of seeing the beams as they reflect off the surfaces of whatever you're trying to scan, and provided you had the software capable of turning that data into vertices in virtual space, Lighthouse operates in principle like a 3D scanner.
Then again, that only works if what you're trying to scan is a spherical cow in a vacuum.[/QUOTE]
supermarket scanners use the lasers to read simple patterns in front of them, and lighthouse does not, and does not use cameras
they are completely different
it is not at all like a scanner
Will my DK2 work with TF2 or will there be issues like no head tracking?
TF2 has crazy amounts of VR stuff in it, actually. Lots of experiments with what your head controls in it. The main issue you might encounter in playing TF2 is that it might be hard to keep up with everyone else on the server because you're going to be a little less reactive/superhuman.
You might have to go hunting through console commands to enable it though. Haven't checked in on that since it was first put into the game.
[QUOTE=bitches;47384375]supermarket scanners use the lasers to read simple patterns in front of them, and lighthouse does not, and does not use cameras
they are completely different
it is not at all like a scanner[/QUOTE]
You... do realize I'm using the "supermarket scanner" thing as a joke, right?
Look it's very simple
[img]http://i.imgur.com/0jJ8O9W.png[/img]
Boss and I were yakking a lot today about potential projects for the Vive using the Starflare engine (the engine that powers Star Ruler 2).
Personally, I think we could definitely at least make the game a spectacle by virtue of what the engine can handle in realtime.
[thumb]http://i.imgur.com/taj93Uk.jpg[/thumb]
Made me wonder - what exactly will the upper limits of VR be in the next 3 years, if we've already come this far in two?
I'd love to play a big-ass Mount and Blade in VR. I'd pay $70 - no, $100 for just the game. What big projects are in development right now?
There's gotta be more 'bombastic spectacle' than just EvE: Valkyrie coming out, right?