Because why should I press a button to go fullscreen, then turn my phone sideways, when I could do it all in one action? The only time it's an issue is when I'm lying on my side in bed, which is maybe 30 minutes of the hours I use my phone.
You're the best. But you already knew that...
If fixed-function units + higher bandwidth are the only good way of increasing RT perf, then it's worrying, as transistors won't be scaling in size or speed anytime soon.
I just thought, portal would be a great game for raytracing.
On your phone? OLED displays literally have no backlight. Also saw a friend use some Pixel Saver app thingy that turns half the pixels off in a mesh. You lose some brightness, but until like 75% off, the quality loss isn't that bad.
FP has always been terrible to browse on mobile devices. I haven't even tried reading it recently, as my phone sucks and so on. I just browse FP on my laptop.
Also, the "arms fall asleep" and "cold arms" arguments are just bad. Do you live in a walk-in fridge or something? Currently my apartment doesn't have central heat yet, and the temperature inside dropped to 19-20°C or so. Outside it is 5-10°C, going colder in the coming days. I have no problems staying warm under a blanket.
That cold thing reminds me of how my setup at home probably totals almost 1kW. When I'm cold, gaming heats up the room pretty well. Should probably buy a Kill A Watt and see exactly what the power usage is, just for fun. I wouldn't even call power-hungry things a waste; they're an additional central heating system.
There was a blue light filter option I had that was either part of an older Android version or the ROM I was using, but after I upgraded to Oreo, it disappeared. It kind of helped with the brightness issue, but I just suck it up now. The worst is getting flashbanged by Facepunch when the FP app is loading pages - it flashes white before the dark CSS is applied.
I live in Florida, it could very well be 90f+ (32c+) any day of the year, and we run our AC 24/7/365 here. After 3am, mine is set to go to 65f (18c), with ceiling fan on high, and air vent directed at my bed from the ceiling. Heating is a myth here except maybe a week out of the year.
My Kill A Watt is reading ~600W under load, <250W idle. They are really cool to have. (I don't even want to know what you're doing to pull 1kW)
My note 8 is on Oreo, I have a blue light filter.
Could also be part of the "Samsung Experience" though.
Stock android has it. I had it since I got my OG pixel afaik.
I really don't know how close to 1kw I could be. Let's think.
I have a 650W PSU, but I'm probably only using ~400W there. I have 3 old LCD screens; they all had 75W max stickers, but probably run ~50W each. That's ~550W. My speakers consist of an old PSU running a car audio system. The main element is rated for 125W, and I'm pretty sure the other parts aren't very efficient either, but still, I don't think it should be using more than 100W.
650W. I missed my 1kW mark pretty badly, damn. I need to upgrade to consume more power, of course.
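For what it's worth, the tally above adds up consistently; a quick sketch (all wattages are the guesses from the post, not measured values):

```python
# Rough component-by-component estimate from the post; every figure here
# is a guess, not a Kill A Watt reading.
loads_w = {
    "PC (650 W PSU, partial load)": 400,
    "3x old LCD monitors (~50 W each)": 3 * 50,
    "car-audio speaker rig": 100,
}

total = sum(loads_w.values())
print(total)  # 650
```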
one of the benefits of coming in early for work is that you get to be the first one to find out that your work network is down and you can't do any work since all your VMs are in vSphere and all your code is on a network drive
at least i get downtime pay
I wish Firefox had tab-to-search so I could even consider using it, but looks like they're still dragging their heels on that.
782557
https://twitter.com/jonathanmayer/status/1044300922149588993
gg
unironically glad that i use apple maps
If I ever need an example to express the phrase "First world problems" I'll quote you on this.
Jesus.
Who said that's the only way to increase RT performance? I'm sure we'll keep finding better and better ways to early-fail entire chunks of the scene, or ways to make rays more coherent. It's like raster graphics - while you can obviously speed things up by just throwing more cores and bandwidth at it, we found plenty of other optimizations too.
Also, while transistors might not be shrinking much, there's still near-infinite room for improvement in cost per unit area. We've always seen mature processes get larger and larger dies, there's no reason that can't continue even further - and while that might not do much for single-core performance, anything parallel (like raster or raytraced graphics) can certainly benefit from more and more cores.
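A classic example of early-failing a whole chunk of scene is the bounding-box test in a BVH: if a ray misses a node's box, nothing inside the box ever gets tested. A minimal sketch of the slab test (my own illustration, not any particular engine's code):

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    # Slab test: intersect the ray with each pair of axis-aligned planes
    # and keep the overlapping parameter interval [tmin, tmax].
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    # Empty interval -> the ray misses the box, so every triangle
    # inside it is skipped without being tested individually.
    return tmin <= tmax

# Ray from the origin along (1, 1, 1); inv_dir is 1/direction per axis.
print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))    # True
print(ray_hits_aabb((0, 0, 0), (1, 1, 1), (5, -1, -1), (6, 0, 0)))  # False
```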
so I guess since everything is RGB now except for screws, manufacturers are going for OLED displays on everything
or at least, asus is, with that freaking 850 or 1200 watt platinum power supply with an OLED display on the side, and now a few AIOs with OLED displays covering up the cooler
Diffuse surfaces/subsurface scattering and coherent rays seem contradictory. The more you keep your rays together, the less photorealistic the results would get, or so I believe.
Larger dies produce more heat, that's unavoidable, and raytracing seems like something that would generate plenty of heat and draw a lot of power. Making the dies larger and/or stacking them seems like a tough problem because of that.
Man, I was happy when I got a phone that didn't take 30 seconds for the video to start or get stuck. It's nearly instant now, even over LTE. That still blows my fucking mind.
People like to bitch and whine about the small stuff, but I still remember very vividly how much worse smartphones were years ago, Android in particular. The shit we put up with in the early 2010s would be completely unacceptable now.
In theory, you could group rays that are parallel or nearly so - starting from slightly different points, but pointed in the same direction. No effect on the end result if you have the same number of rays, but sorting this way would vastly improve cache hit rate. (The problem, of course, is figuring out how to sort this way, and how to generate all the rays before sorting)
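That kind of grouping could look something like this - a purely illustrative 2D sketch using coarse angular buckets (real schemes sort in 3D and usually consider origins too):

```python
import math

def direction_bucket(d, bins=8):
    # Quantize a 2D direction into one of `bins` coarse angular buckets.
    # Rays in the same bucket tend to walk the same acceleration-structure
    # nodes, which is where the cache-hit-rate win would come from.
    angle = math.atan2(d[1], d[0])
    return int((angle + math.pi) / (2 * math.pi) * bins) % bins

# Nearly-parallel rays from different origins end up adjacent after sorting.
rays = [
    ((0, 0), (1.0, 0.1)),   # ray A
    ((5, 2), (-1.0, 0.0)),  # ray B, pointing the opposite way
    ((1, 1), (1.0, 0.2)),   # ray C, nearly parallel to A
]
rays_sorted = sorted(rays, key=lambda r: direction_bucket(r[1]))
```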
Fixed-function hardware is much more efficient than a general-purpose compute core. So no, it actually isn't something that would run hot, relatively speaking.
Larger dies produce more heat if the entire die is in simultaneous use. A GPU that's running a ton of ray-tracing stuff is likely not doing as much in the actual shaders. This sort of design is already common in smartphone SoCs - have special, dedicated hardware on-chip that's only used occasionally, to do a special task more efficiently.
While stacking does have thermal scaling problems, simply making larger dies does not - heat flux is proportional to temperature delta times surface area. You'd need a bigger radiator in the end but they've been growing forever. I still have a GeForce 2 MX somewhere, with not even a heatsink on it. Then we got heat spreaders, then little fans, then GPUs became double-slot to have bigger fans, then we got blower designs, vapor chambers, triple-slot GPUs, closed-loop coolers... the entire history of graphics acceleration is one of always needing more power and cooling.
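The surface-area point as a back-of-the-envelope calculation (Newton's law of cooling, P ≈ h·A·ΔT; the coefficient here is a made-up illustrative number, not a real cooler spec):

```python
def dissipated_watts(h, area_m2, delta_t):
    # Newton's law of cooling: heat shed scales with the heat-transfer
    # coefficient, the surface area, and the temperature delta.
    return h * area_m2 * delta_t

# Same temperature delta, double the radiator area -> double the heat shed.
p_small = dissipated_watts(50, 0.05, 40)  # ~100 W
p_big   = dissipated_watts(50, 0.10, 40)  # ~200 W
```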
Well, got nothing else to comment on or to ask about. Thanks for the informative replies.
So apparently AMD is (thankfully) going back to GPU die code-names instead of the horrible current naming scheme.
Which is good, because we have "Vega 10", "Vega 11", "Vega 20"; and GFX8; and "RX Vega 56" and "RX Vega 64" and "RX Vega M GH Graphics 24"; and "Radeon™ Vega 8 Graphics", and "Radeon™ RX Vega 11 Graphics" (not to be confused with Vega 11).
I'm semi-fine with Linus eating some mains. Like, 80% OK with that.
Canada's on 120V so it'll just be a tickle, though.
They should stick to the RX (Generation)(Tier)(Revision) thing they've got going for consumer-facing product names, and stop doing stuff like Fury and Vega. Then for internal naming (Hawaii, Fiji, Polaris, Vega), go back to the tried-and-true naming system of traditional GCN.
Pretty sure it's some battery project. I think he's using those Vruzend holders, last I saw they were in some small vehicle. Wonder if he's doing like a higher power toy car that kids drive.
I don't know, I think including the CU count right in the name is kind of brilliant because it tells you exactly how the cards stack up against each other performance-wise. IMO they should've done the same on the CPU side and used Ryzen 4/6/8 names, but I guess it pays more to copy Intel.
The only downside is that it might be confusing for less knowledgeable customers who don't know that for example "Navi" is newer and better than "Vega". And of course it's a mistake to have the codename be the same as the actual product name, that just creates a ton of confusion.
Couldn't see shit because of Twitter's auto-resolution bullshit; I had some Internet problems at the time.
But even now that my connection is fine, the source looks like absolute garbage.
Dumbo didn't have insulation around the cells themselves, so it'll be interesting to see what dumpster fire they've cobbled together this time.
I bumped up SNR margin on my modem to the moon, data rates are fine, will be interesting to see if my connection starts fucking up around 6-7PM like it did yesterday.
It's always this time of year that this shit happens. I'm 99% sure the phone lines buried underground have expansion issues from the thermal cycling caused by hot days + cold nights.
This is during the day, with the ridiculous SNR margin I'm forcing, and data rates are fine:
https://files.facepunch.com/forum/upload/109818/02762d0a-efdf-431c-ae44-4d2ed241da32/chrome_2018-09-27_16-17-31.png
You guys told me to fix this, but this was sort of like a scream for help.