General Linux Chat and Small Questions v. Year of the Linux Desktop!
4,886 replies
[QUOTE=killerteacup;50180065]Thanks for the link, just had a read through - So ZFS is for when you want a huge increase in data reliability at the cost of either a decrease or no noticeable change in performance? Does this do away with RAID? We run a lot of storage - 3 large arrays over a disc network and then smaller arrays in many of our servers and all of it is configured with RAID but disc failures are still a huge pain. Nevertheless it sounds like something I should suggest[/QUOTE]
ZFS can do software RAID; it even has equivalents of RAID5 and RAID6 without the write-hole problem (RAID-Z1 and RAID-Z2), and a third variant (RAID-Z3) that runs with three parity disks instead of one (RAID5) or two (RAID6).
The only real downside of ZFS is Oracle being a massive disk as usual: they bought out Sun, who were actually pretty cool about others using their ZFS code, so the project had to be forked (what is now OpenZFS).
And that you'd have to re-flash your RAID cards so they act as "dumb" SATA/SAS HBA cards.
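For reference, creating those RAID-Z variants is a one-liner each; the pool name and device paths below are made up, so adjust for your own hardware:

```shell
# Hypothetical pool "tank" -- raidz2 tolerates two simultaneous disk
# failures, like RAID6, but without the write hole.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Same shape for the other parity levels:
#   zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd   # one parity disk
#   zpool create tank raidz3 /dev/sdb ... /dev/sdh        # three parity disks

zpool status tank   # shows the vdev layout and per-disk health
```

These are illustrative only; `man zpool` covers the minimum disk counts and trade-offs for each level.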
[QUOTE=mastersrp;50180099]Yes and no. Perceived uptime is. Real uptime may not be important. The real important thing is keeping systems online, but it may not be the same systems. A proper infrastructure takes redundancy into account so that real uptime becomes irrelevant, and perceived uptime becomes the norm.
[editline]22nd April 2016[/editline]
I was talking more about the data than the systems though. A filesystem cannot keep your servers online.[/QUOTE]
Uptime is whatever you can market as five-nines availability, really. No matter how you get it.
[QUOTE=Levelog;50180112]Uptime is whatever you can market as five-nines availability, really. No matter how you get it.[/QUOTE]
Perhaps, but real uptime is harder than perceived uptime. Servers can die, filesystems can corrupt, RAID controllers can die, hard drives can die, and so on. Keeping real uptime at 99.999% is hard when fighting those issues. Setting up an infrastructure where those issues aren't even a problem, even if they all happen at once, is probably the easier way to keep your SLA promises.
[QUOTE=~Kiwi~v2;50180127]Don't make a promise you can't keep.
This goes out to everything.
Never ever promise 100% anything.[/QUOTE]
This goes without saying. For certain systems you could probably promise 100% uptime, but it's always safer to stop at some number of nines, even if just to accommodate the <100ms live migrations of a dead VM, or 50 dead VMs.
You don't promise 100%, you promise 99.999% for a hefty price. Or 99.9999% if you believe in unicorns.
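For intuition, each extra nine cuts the allowed downtime by 10x; a quick sketch of the yearly downtime budget behind those figures (using a 365.25-day year, i.e. 31,557,600 seconds):

```shell
# Print the downtime budget implied by common SLA percentages.
for sla in 99.9 99.99 99.999 99.9999; do
  awk -v s="$sla" 'BEGIN {
    printf "%s%% -> %.1f seconds of downtime per year\n", s, (100 - s) / 100 * 31557600
  }'
done
```

So five nines leaves you roughly five minutes a year, which is why sub-100ms live migrations barely dent the budget.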
Obviously you wouldn't be doing live migration of a literally dead VM, but if the VM host is having issues, then a live migration with <100ms of downtime is probably something that most clients will never ever know about, so the 99.999% SLA will likely still be held up.
Damnit kiwi
[QUOTE=mastersrp;50180123]Perhaps, but real uptime is harder than perceived uptime. Servers can die, filesystems can corrupt, RAID controllers can die, hard drives can die, and so on. Keeping real uptime at 99.999% is hard when fighting those issues. Setting up an infrastructure where those issues aren't even a problem, even if they all happen at once, is probably the easier way to keep your SLA promises.[/QUOTE]
But it's a balancing act between reliability and performance: the more reliable something is, the slower it becomes because of all the checks and balances. So storage-focused setups should promise uptime most of the time, and performance-focused setups should promise performance most of the time. The needs of the user dictate.
[QUOTE=killerteacup;50180138]But it's a balancing act between reliability and performance: the more reliable something is, the slower it becomes because of all the checks and balances. So storage-focused setups should promise uptime most of the time, and performance-focused setups should promise performance most of the time. The needs of the user dictate.[/QUOTE]
Some performance might be lost, but it depends on your plan. You can get a LOT of reliability, or at least [b]perceived[/b] reliability, without much if any performance loss. The important part is being able to keep something up, and sometimes you don't need any checks except "is anything failing right now".
You can, with almost no performance loss, promise 99.99% uptime easily. With a proper infrastructure design, such as colocated and mirrored setups per location, you can have virtual machines run with more than 99.9999% perceived uptime, even if a physical server is shutting down. You just migrate everything away, maybe even across continents (with <10s downtime), and serve it from there. If you also *own* the IP addresses, you can even do redundancy on those, so that if a datacenter starts failing completely, everything is migrated away, the IPs are relocated, and everything keeps running as if nothing ever happened, even though a nuke just blew Paris away.
Doesn't ZFS' RAM usage go down if you disable deduplication?
[QUOTE=Adam.GameDev;50180193]Doesn't ZFS' RAM usage go down if you disable deduplication?[/QUOTE]
It does indeed; deduplication is what really eats through the RAM. Disabling it is probably fine if you just want something to handle soft RAID. You can then use borg for backups and have the backups deduplicated even if the live data isn't.
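Worth noting that dedup is off by default anyway; checking and pinning it down looks roughly like this (the pool name "tank" is a placeholder, and the ARC cap is an assumption for a RAM-tight box running ZFS on Linux):

```shell
zfs get dedup tank       # "off" is the default
zfs set dedup=off tank   # only affects blocks written from now on

# Optionally cap the ARC (ZFS's read cache) at e.g. 4 GiB on ZFS on Linux:
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
```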
[QUOTE=mastersrp;50180296]It does indeed; deduplication is what really eats through the RAM. Disabling it is probably fine if you just want something to handle soft RAID. You can then use borg for backups and have the backups deduplicated even if the live data isn't.[/QUOTE]
IMO I'd rather deal with using a quality SSD as cache and using larger disks than using deduplication & related RAM usage.
I can't seem to install a .deb file on Ubuntu 16.04 that used to install fine on previous versions of Ubuntu.
I am trying to install Sublime Text Editor. Every time I launch the deb file and install it from Ubuntu Software (that's what they're calling the software center now), it just gets stuck on installing; it doesn't ask for my root password, it doesn't do anything.
I haven't tried installing it using the terminal yet, and I will probably do that now, but I am curious as to why the software center can't install it.
[QUOTE=Reflex F.N.;50180316]I can't seem to install a .deb file on Ubuntu 16.04 that used to install fine on previous versions of Ubuntu.
I am trying to install Sublime Text Editor. Every time I launch the deb file and install it from Ubuntu Software (that's what they're calling the software center now), it just gets stuck on installing; it doesn't ask for my root password, it doesn't do anything.
I haven't tried installing it using the terminal yet, and I will probably do that now, but I am curious as to why the software center can't install it.[/QUOTE]
[I]sudo dpkg -i '/path/to/package.deb'
[/I]Or if you want a GUI for handling *.deb packages:
[I]sudo apt-get install gdebi[/I]
But I'm personally not a fan of the software center, and prefer Synaptic when installing/updating from a repository rather than from plain *.deb packages:
[I]sudo apt-get install synaptic[/I]
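One reason GUI front-ends choke on third-party .debs is that dpkg itself doesn't resolve dependencies; from the terminal the usual dance is (path is a placeholder):

```shell
sudo dpkg -i /path/to/package.deb   # may complain about missing dependencies
sudo apt-get -f install             # pulls in whatever the package needed

# gdebi does both steps in one go:
sudo gdebi /path/to/package.deb
```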
On the topic of deduplication, are there any deduplicating archive formats?
[QUOTE=Adam.GameDev;50180330]On the topic of deduplication, are there any deduplicating archive formats?[/QUOTE]
ZPAQ.
That, and borg. ZPAQ is great for archives, but borg is far superior for secure backups.
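For anyone curious, the basic invocations for both look like this (repo and data paths are placeholders):

```shell
# borg: deduplicating, optionally encrypted backup archives
borg init --encryption=repokey /backup/repo
borg create /backup/repo::archive-{now} /data

# zpaq: deduplicating, journaling archiver -- each "add" appends an
# incremental snapshot to the same archive file
zpaq add backups.zpaq /data
```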
[QUOTE=Van-man;50180328][I]sudo dpkg -i '/path/to/package.deb'
[/I]Or if you want a GUI for handling *.deb packages:
[I]sudo apt-get install gdebi[/I]
But I'm personally not a fan of the software center, and prefer Synaptic when installing/updating from a repository rather than from plain *.deb packages:
[I]sudo apt-get install synaptic[/I][/QUOTE]Oh, I know how to install it from the terminal, but I was just curious why the Software Center won't install it on Ubuntu 16.04.
[QUOTE=Reflex F.N.;50180369]Oh, I know how to install it from the terminal, but I was just curious why the Software Center won't install it on Ubuntu 16.04.[/QUOTE]
Software Center is a pile of shit, although did they actually change to the one made by GNOME, or did they abandon that plan at the last minute?
I just know both their home-grown version and GNOME's stand-alone are horrible; seems like many people fawn over [URL="http://www.appgrid.org"]appgrid[/URL] instead
[QUOTE=Van-man;50180382]Software Center is a pile of shit, although did they actually change to the one made by GNOME, or did they abandon that plan at the last minute?
I just know both their home-grown version and GNOME's stand-alone are horrible; seems like many people fawn over [URL="http://www.appgrid.org"]appgrid[/URL] instead[/QUOTE]It's not the same software center; it's a new one.
See the 9th point in this article:
[url]http://www.omgubuntu.co.uk/2016/04/10-things-to-do-after-installing-ubuntu-16-04-lts[/url]
[QUOTE]A new software store ships as part of Ubuntu 16.04 LTS.
Direct from the department of “Long Overdue Changes”, this all-new app store replaces the Ubuntu Software Center which has shipped in every Ubuntu release since Ubuntu 9.10![/QUOTE]
What is gvfsd-smb-brow and why is it ESTABLISHED when I enter netstat -natp? I googled it and it seems to have something to do with Samba, but I couldn't understand what exactly.
[QUOTE=Reflex F.N.;50182362]What is gvfsd-smb-brow and why is it ESTABLISHED when I enter netstat -natp? I googled it and it seems to have something to do with Samba, but I couldn't understand what exactly.[/QUOTE]
AFAIK it's a plug-in/module for GNOME-based file browsers, enabling them to connect to and navigate SMB/Samba shares.
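The name also just looks cut off because netstat truncates process names; the full daemon is gvfsd-smb-browse. A couple of ways to poke at it:

```shell
# See which host/port it is talking to (ports 139/445 = SMB):
sudo netstat -natp | grep gvfsd
ss -tnp | grep gvfsd        # iproute2 equivalent of the above

# GVFS spawns it on demand for network browsing, so killing it is harmless:
pkill -f gvfsd-smb-browse   # respawns next time something browses shares
```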
[QUOTE=Van-man;50182744]AFAIK it's a plug-in/module for GNOME-based file browsers, enabling them to connect to and navigate SMB/Samba shares.[/QUOTE]Oh, all right. Thanks for your help! :smile:
Tried to upgrade my Ubuntu Server 14.04 to 16.04 LTS last night. Complained about errors and ended up not really working, resulting in a non-booting system.
For now, I've reverted to my backup. However, if anyone has any pointers on how to get this to work I'd be happy to hear about it (it's my first time doing an upgrade of this scale).
[QUOTE=Natrox;50186098]Tried to upgrade my Ubuntu Server 14.04 to 16.04 LTS last night. Complained about errors and ended up not really working, resulting in a non-booting system.
For now, I've reverted to my backup. However, if anyone has any pointers on how to get this to work I'd be happy to hear about it (it's my first time doing an upgrade of this scale).[/QUOTE]
A thing to understand about Ubuntu upgrades on servers between LTS releases is that unless you're doing VERY simple things, or nothing at all, you'll probably need to develop some sort of migration script to adapt your existing configurations. It's always ideal to avoid editing the stock configurations in the first place, but for the ones that have been edited, the upgrade will require changes that go beyond a simple package update.
It's also VERY important to make sure you're running the latest software first. That means
[code]apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y[/code]
You might want to do another upgrade pass after the apt-get upgrade line, but it isn't always needed. Beyond that, it's mostly a matter of cloning your existing setups, running dist-upgrade, seeing how much shit breaks, then figuring out how to fix it.
After a few days of this you might have a fully functional list of tasks to perform to dist-upgrade without issues. But it's never been smooth as far as I know.
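For the LTS-to-LTS jump itself, the supported route is do-release-upgrade rather than editing sources.list by hand; roughly (run it inside screen/tmux, since a dropped SSH session mid-upgrade is a good way to end up with a non-booting system):

```shell
sudo apt-get update && sudo apt-get dist-upgrade -y   # be fully current first
sudo apt-get install update-manager-core              # provides do-release-upgrade
sudo do-release-upgrade
# Note: LTS->LTS upgrades are only offered automatically once the first
# point release (16.04.1) is out; before that you'd need the -d flag.
```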
[QUOTE=mastersrp;50186191]A thing to understand about Ubuntu upgrades on servers between LTS releases is that unless you're doing VERY simple things, or nothing at all, you'll probably need to develop some sort of migration script to adapt your existing configurations. It's always ideal to avoid editing the stock configurations in the first place, but for the ones that have been edited, the upgrade will require changes that go beyond a simple package update.
It's also VERY important to make sure you're running the latest software first. That means
[code]apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y[/code]
You might want to do another upgrade pass after the apt-get upgrade line, but it isn't always needed. Beyond that, it's mostly a matter of cloning your existing setups, running dist-upgrade, seeing how much shit breaks, then figuring out how to fix it.
After a few days of this you might have a fully functional list of tasks to perform to dist-upgrade without issues. But it's never been smooth as far as I know.[/QUOTE]
Thanks for the write-up. I feel like I was definitely up-to-date, and I did expect shit to break, albeit not this spectacularly :v:. I think with the amount of stuff I do on my server, I might as well clean install after migrating specific parts and configurations. Seems like a cleaner way to go about it.
I'm liking openSUSE Tumbleweed. YaST (a control panel, one of the distro's defining features) is very featureful and pretty easy to use, and it's all very well integrated with KDE, which makes sense because they've contributed a lot to KDE. Only issue is that it's quite bloated by default, with some 2k packages, but that's probably my fault for not looking through the install that much, and the categories in YaST make it easy to remove a lot of that.
Also been using KDE Connect some more and it's actually quite nice. Rather than getting some other remote to control my media when I'm, say, just lying in bed, I can use my phone if the specific program supports it, or just use the phone as a mouse if it doesn't. The desktop notifications and shared clipboard are also pretty handy: if I want to text a link to somebody, I just ctrl-c it on my desktop and paste it into the messaging app on my phone.
In my experience, the openSUSE installer installs whatever the fuck it wants even if you uncheck it. Best way to get a non-bloated installation would be to make an appliance in SUSE Studio, but they don't provide Plasma 5 templates.
[editline]23rd April 2016[/editline]
Is there something like SUSE Studio but for Debian/Ubuntu that lets me choose the file system and can produce QEMU images?
[QUOTE=thelurker1234;50186570]I'm liking openSUSE Tumbleweed. YaST (a control panel, one of the distro's defining features) is very featureful and pretty easy to use, and it's all very well integrated with KDE, which makes sense because they've contributed a lot to KDE. Only issue is that it's quite bloated by default, with some 2k packages, but that's probably my fault for not looking through the install that much, and the categories in YaST make it easy to remove a lot of that.
Also been using KDE Connect some more and it's actually quite nice. Rather than getting some other remote to control my media when I'm, say, just lying in bed, I can use my phone if the specific program supports it, or just use the phone as a mouse if it doesn't. The desktop notifications and shared clipboard are also pretty handy: if I want to text a link to somebody, I just ctrl-c it on my desktop and paste it into the messaging app on my phone.[/QUOTE]
SUSE would be my go-to binary distro, except that in order to use proprietary technologies (including MP3 support, which I need, and the ability to run code whose source I mostly can't inspect), I'd have to add a third-party repo, and the only one I can find with up-to-date packages also tends to cause a lot of problems with my system for some reason.
If you're just planning on using FOSS software anyway, then SUSE is a great distro, really underrated.
For some reason I can't install Gdmap on Gentoo.
All the Layman repos that carry it have ebuilds that fail to compile.
[QUOTE=Van-man;50192503]For some reason I can't install Gdmap on Gentoo.
All the Layman repos that carry it have ebuilds that fail to compile.[/QUOTE]
You might try downloading and compiling it yourself, then writing yourself a nice custom ebuild.
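If you go that route, a local overlay ebuild is mostly boilerplate. A hypothetical sketch, with the version, SRC_URI and dependencies guessed and therefore to be checked against upstream:

```shell
# /usr/local/portage/app-misc/gdmap/gdmap-0.8.1.ebuild  (path/version are assumptions)
EAPI=6

DESCRIPTION="Graphical disk usage map"
HOMEPAGE="http://gdmap.sourceforge.net"
SRC_URI="mirror://sourceforge/${PN}/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"

# Assumed: gdmap is a GTK+ 2 autotools project, so the default
# src_configure/src_compile/src_install phases should do the right thing.
DEPEND="x11-libs/gtk+:2"
RDEPEND="${DEPEND}"
```

Then `ebuild gdmap-0.8.1.ebuild manifest` in the overlay directory and `emerge gdmap` as usual.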