CIPWTTKT&GC 0x30 - Racing to the thread post limit
writing them in a notebook and keeping it in your underwear drawer
1Password works really well with their subscription, but otherwise syncing between devices is a pain if you don't use specific cloud services
I've subscribed to LastPass for years. I like it.
I've been using keepass
but telling people how I do keepass would be bad opsec
uhh. I am pretty sure they notified all users and we got the "o shit we either got banned or something happened" password change screen. I remember because I changed my password from a 6 letter dumb word to a better one after that.
Nothing compares to going into a public git repo, jumping back to like the second ever commit, and finding a WORKING authentication token that was "removed" from the repo by just committing over it
My favourite ever was this. Keys cost $75 a month. But now I just rip them out of the build configs
https://github.com/klinker24/talon-twitter-holo/blob/e6d539392356cad5d6b47b8a568572596c86d524/src/main/java/com/klinker/android/twitter/APIKeys.java#L52
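If anyone wants to go digging for that kind of thing themselves, a rough sketch along these lines works (the regex and repo path are generic placeholders, nothing from the repo above):

```python
import re
import subprocess

# Rough sketch: walk every commit in a repo and grep the diffs for strings
# that look like hardcoded keys. The pattern is a generic placeholder.
KEY_PATTERN = re.compile(
    r'(?:api[_-]?key|secret|token)\s*=\s*["\']([A-Za-z0-9_\-]{20,})["\']',
    re.IGNORECASE,
)

def scan_history(repo_path="."):
    # Every commit hash, oldest first
    commits = subprocess.run(
        ["git", "-C", repo_path, "rev-list", "--reverse", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    for commit in commits:
        # Full patch introduced by this commit
        diff = subprocess.run(
            ["git", "-C", repo_path, "show", commit],
            capture_output=True, text=True,
        ).stdout
        for match in KEY_PATTERN.finditer(diff):
            print(f"{commit[:10]}: possible key -> {match.group(1)}")

if __name__ == "__main__":
    scan_history()
```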
Not sure how that's relevant, Intel makes much bigger dies with up to 28 cores. Besides, server parts are usually clocked lower which improves the heat density situation.
And like I said, it might happen. Everything we know about Zen 2 so far is pure rumor.
Zen is super, super unlikely to use more than 8 cores per die. 8 is pretty much a sweet spot for yield and density, and current rumors point to them moving even more of the I/O to a "coordinator" type die, at least on the EPYC platform, which would allow them pretty much any core count they wanted; current speculation is 64 cores per socket.
Current design rumors:
https://i.loli.net/2018/09/25/5baa3f5c5d3c0.jpg
I think there's a chance AMD could go 6-core CCX (and I think it's still a good idea), resulting in 12-core dice, giving them 48-core CPUs on the EPYC front using the same 4-die interconnect they currently use. You'd run into issues with memory bandwidth feeding 12 cores on the desktop, though, and I don't think the mainstream desktop needs 12 cores anyway (8 is a very sweet spot for clock/thread balance).
Anyway, I think that MCM-based designs are very much the future, but only for HEDT and HPC segments (on the CPU front, GPUs are a different matter) - AMD should strike a good balance where the single CCXs can hit very good clock frequencies (and good IPC) but scale down very well into lower-clock server uses (like Intel can).
AdoredTV has a great video talking about interposers and the future of MCM designs:
https://youtu.be/G3kGSbWFig4
Am I the only one who finds this "8+1 dies" rumor a bit farfetched? 7nm is already gonna bring a substantial density improvement over 14nm, and people are now suggesting that they'll also take the I/O off the die, which would decrease the size even further, resulting in dies that are like what, half the size of the current ones? Why couldn't they afford to just, you know, put more cores on one die with the improvement in density? Surely there has to be a power efficiency advantage to using fewer dies per package.
And like I said, I don't believe there's much of a need for mainstream consumer dies with >8 cores, the higher core count ones would make sense mostly for threadripper and epyc.
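To put rough numbers on the "half the size" guess (assuming the CCXs are ~40% of the current 14nm die and 7nm gives roughly 2x density, both just ballpark assumptions, and ignoring whatever IF link the chiplet still needs):

```python
# Rough sizing of a rumored compute-only chiplet relative to today's Zeppelin die.
CCX_FRACTION_14NM = 0.40   # assumed share of the current die taken by the CCXs
DENSITY_GAIN_7NM = 2.0     # assumed density gain, TSMC 7nm vs GloFo 14nm

compute_only = CCX_FRACTION_14NM / DENSITY_GAIN_7NM
print(f"Compute-only 7nm die ~ {compute_only:.0%} of today's Zeppelin area")
```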
Yield loss from defects on silicon scales exponentially with mm².
Which means you can't make bigger dies and remain economical (the 28 core dies from Intel are ridiculously hard to manufacture, which is why they cost so much).
That means the only option is to pack the cores closer together. Which means you run into the dark silicon problem.
The whole point of MCM is to avoid large dies because large dies suck ass at everything.
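To put toy numbers on that (a simple Poisson yield model, Y = exp(-D*A); the defect density here is invented for illustration since foundries keep the real figures secret):

```python
import math

# Toy Poisson yield model: fraction of good dies = exp(-D * A).
DEFECTS_PER_CM2 = 0.2  # assumed, purely illustrative

def good_die_fraction(area_mm2, d=DEFECTS_PER_CM2):
    return math.exp(-d * area_mm2 / 100.0)

for area in (80, 160, 320, 640):  # small chiplet up to a big monolithic die
    print(f"{area:>3} mm^2 -> ~{good_die_fraction(area) * 100:.0f}% good dies")
```

Every doubling of area costs you disproportionately more good dies, which is the whole economics argument.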
Yield - yield gets worse non-linearly as the die size goes up, which is especially a problem on new nodes. There isn't really a power efficiency boon for monolithic designs; uncore doesn't use much power for the most part, and in fact beyond 8 cores you run into power density issues and gate/core resistance lowest-common-denominator situations (i.e. you have to shove the voltage the worst core needs into the whole CPU, and with more cores the chance of getting a worse core increases).
You also just run into the issue of feeding those cores: more cores means you need more power, and it also means you need more memory bandwidth, something that can't really be improved until DDR5 or a move to more memory channels (which would again kill the AM4 platform).
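Quick back-of-the-envelope on the bandwidth side (assuming dual-channel DDR4-3200 on AM4, peak theoretical numbers):

```python
# One 64-bit DDR4-3200 channel moves 3200 MT/s * 8 B = 25.6 GB/s peak.
CHANNELS = 2            # AM4 desktop
GBS_PER_CHANNEL = 25.6  # DDR4-3200

total = CHANNELS * GBS_PER_CHANNEL
for cores in (8, 12, 16):
    print(f"{cores:>2} cores: ~{total / cores:.1f} GB/s per core peak")
```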
And because AMD will very likely use the same die down the stack, that means the EPYC 2 / Rome die is very likely to be the desktop die too.
Those are all reasons that MCM is a much better bet, especially on that last part with binning - just watch the video I posted and it covers most of that.
Miscellaneous info:
The CCXs are only ~41% of the whole die by mm²
Haveibeenpwned was a bit fucking late considering I do remember all sessions being cleared and forced password resets happening.
Also emails were only sent out if you had the setting turned on to receive emails from admins.
The lateness only means someone just now got hold of the data, scrubbed the passwords, and submitted it to HIBP.
We knew it happened years ago, but back then HIBP wouldn't even have had a list of emails, unless Garry gave them data for the fuck of it
Is TSMC 7nm really supposed to be that bad? I mean they're already making Vega GPUs on it, and those are much larger dies...
I've also read some tests about the power consumption of the infinity fabric, they actually found out that just the uncore is responsible for a significant chunk of the total power consumption, which gets even worse with multi-chip packages.
Exactly, so if they move most of the IO off die and stay with 8 cores/die, that means the resulting dies will be absolutely tiny at 7nm. I don't see that there's a need for that right now. Plus there are more things that don't make sense to me about the separate IO die concept. First off, die to die latencies are already a problem with the current designs, and now the IO die creates another node in the path that all communication has to go through. Isn't that gonna increase the latencies even further? The only benefit I can see is that all dies would now have uniform access latencies to all memory. Yay?
Plus there's the whole compatibility with the existing platforms question. Would this radically new layout be compatible with the existing socket pinouts and chipsets? Seems like there could be problems making that work. Remember that the Zen 2 that's coming in 2019 is supposed to still be compatible with the old platforms. I can see them mixing things up in 2020+ when they switch over to a new platform.
I've watched the video that you linked, it's interesting stuff. It mostly talks about connecting dies with a network that's implemented on an interposer and how different topologies affect latencies. It actually makes a fair amount of sense. But these weird ass rumors going around are suggesting something completely different, and the video doesn't really dive into that or even attempt to give it any legitimacy.
Remember us talking about Twitter removing the alpha value from images for thumbnails?
Hey, just figured out how it works in photoshop.
https://tenryuu.blob.core.windows.net/astrid/2018/10/18-10-21_21-53-06.webm
How interesting
We don't have exact figures, this stuff is industry secret, but new nodes always have yield issues, and they're also super expensive (each new node is even more expensive than the last).
It makes sense for AMD to optimize their design with this in mind, such as by making the I/O die a 14nm part.
That Vega is also part of the Radeon Instinct (MI) lineup, of which the current Vega part (MI25) runs ~$6,000 USD.
If they're keeping the memory controller on die, then what's the point of making a whole separate die for the other I/O? I simply don't see the huge savings here. Furthermore, that means they'd have to make a separate version of the die with all the I/O on it for the consumer AM4 platform and another version for the server platform. I mean are we really going backwards towards the southbridge/northbridge split again?
This whole 8+1 die config smells of made up clickbait bullshit to me that's made just to cash in on the speculation bandwagon. Has there been any credible explanation for why it might be a thing given so far? If anything, I'd sooner expect it to be just 8 dies per package without the one extra. We already know that the IF link configuration and routing can be pretty flexible (see Threadripper 2 WX), it seems like that could be done.
The DDR4 memory controller is actually fairly small; the spread-out IFOP (Infinity Fabric) and PCIe SerDes are pretty big, as are the integrated SB, SATA, and USB. All of that (except a FAT IFOP to connect to the I/O die) could be cleaned up, which would reduce unneeded redundancy in the dice.
So far ∞F is point to point, so each die has to connect to every other die (meaning, look at the diagram and notice each Zeppelin die has 4 IFOP SerDes blocks, the same as the number of chips on an EPYC package). Going to 8 dice would mean a LOT of space used just for ∞F SerDes, plus a very complicated PCB design to route it all.
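Just to put numbers on how fast point-to-point scaling blows up (pure combinatorics, nothing AMD-specific):

```python
# Links needed for a full point-to-point mesh of n dies:
# n*(n-1)/2 links in the package, and at minimum (n-1) IF ports on every die.
for n in (4, 8):
    links = n * (n - 1) // 2
    print(f"{n} dies: {links} package links, at least {n - 1} IF ports per die")
```

4 dice is still manageable; at 8 you'd be burning a huge chunk of every die on SerDes alone, which is exactly the argument for a central I/O die.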
AMD really only has a few options that make sense: an interposer, an I/O die, or a design with more CCXs per die or more cores per CCX.
There's a decent amount of stuff they'd get to trim off the die. All of the PCIe, USB, Southbridge, and both IFIS SerDes.
Those would have to be replaced with something to connect to the I/O die, but there's still savings to be made there.
Shitty illustration:
https://files.facepunch.com/forum/upload/109818/9e6d251f-fc42-44a6-84cf-84f3d1e0ba73/amd_zen_octa-core_die_shot_(annotated).png
From what I've seen following Troy's blog, it's not terribly unusual for data to only turn up years later. Often it only even comes to light for the users when HIBP gets the data. Though as tratzzz pointed out, Garry did actually notify people of the issue. I remember getting an email about it myself despite not even being affected by it. I'm going to assume that whoever made that tweet the article quotes wasn't active on Facepunch and thus couldn't have possibly been affected by the breach, given the nature of it. I recall Garry making a post stating something along the lines of notifications only going out to accounts that had been active in the last 60 days or something like that.
If I had to guess, selling badly binned / otherwise cut-down cards en masse to Chinese internet cafes
Both AMD and Nvidia are releasing these cut-down cards, primarily in China, and it's really bullshit IMO.
Both companies need to knock it off, just make a new SKU and card.
They're not going to stop - it's free real estate.
Got a source on that? I don't think I've heard anyone claim that it has to be a direct connection from die to die. After a bit of searching I found this:
The Infinity Scalable Data Fabric (SDF) is the data communication plane
of the Infinity Fabric. All data from and to the cores and to the other
peripherals (e.g. memory controller and I/O hub) are routed through the
SDF. A key feature of the coherent data fabric is that it's not limited
to a single die and can extend over multiple dies in an MCP
as well as multiple sockets over PCIe links (possibly even across
independent systems, although that's speculation). There's also no
constraint on the topology of the nodes connected over the fabric,
communication can be done directly node-to-node, island-hopping in a bus topology, or as a mesh topology system.
Then there's this article that goes more in-depth into all the interconnects used in Zeppelin dies and how the packages were designed: ISSCC 2018 (really interesting read btw)
Seems that quite a lot of care went into designing the pin-outs and the layout of the dies so that the routing between everything is as simple as possible. It'd be weird if they suddenly decided to completely change the package layout. I'm not saying it's impossible, but it may require more complex PCBs.
Another interesting thing explained in the article is that with two-socket systems, die to die communication sometimes has to go across two hops because each die is directly linked to only one die on the other socket. It's not a direct confirmation, but it suggests that more complex network layouts may be possible. Either way, once you go up to 8 cpu dies, multiple hops become necessary. In the situation that you proposed where there's one I/O die acting as an IF network switch, that means two hops between dies on the same socket, and 3 hops across sockets (assuming the I/O dies are directly linked to each other).
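Counting it out under those assumptions (one I/O die per socket acting as the IF switch, the two I/O dies linked directly; all hypothetical numbers, not anything confirmed):

```python
# Toy hop counts: current-style full mesh within a socket vs. a rumored
# star layout where every compute die hangs off a central I/O die.
def hops(topology, same_socket):
    if topology == "mesh":
        # direct die-to-die link on package; cross-socket can already take 2 hops
        return 1 if same_socket else 2
    if topology == "star":
        # die -> I/O die -> die, plus one more crossing to reach the other socket
        return 2 if same_socket else 3
    raise ValueError(topology)

for topo in ("mesh", "star"):
    print(f"{topo:>4}: same socket {hops(topo, True)} hop(s), "
          f"cross socket {hops(topo, False)} hop(s)")
```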
Anyway, the main thing that causes me to doubt this rumor is that AMD intentionally made the Zeppelin die larger and more complex than it needs to be in order to gain all that extra flexibility. The strategy seems to have worked out great for them so far. I don't see them reversing that decision and making a radical redesign all of a sudden.
I disagree. Cleaned out an old ESXi host last week, it never turned back on.
Are you sure upping the voltage would see a good overclock? Most information I've seen leads me to believe it has a small benefit but significantly reduces the life of the card.
I forgot how much I really didn't like Windows server.
default configuration:
"Hey we got updates, please install them"
"thanks for hitting that button, we will reboot when we fucking want, even though we are a server"
"your scheduled tasks won't start up from a boot via update btw"
That's the main reason why I switched to Linux for hosting stuff at home