General Linux Chat and Small Questions v. Year of the Linux Desktop!
4,886 replies
[QUOTE=FPtje;52388737]The problem is that simple updates carry such a big risk of things breaking in the first place.[/QUOTE]
That is highly debatable. It is extremely unlikely for a simple update to "break"; at worst it's going to run into an error and abort the transaction. Literally every single case I've seen of Arch "breaking" was due to user error: usually because they ignored an error and forced an upgrade anyway, ignored pacnew files (which indicate config file changes), did partial upgrades, etc.
Looking over announcements for the past few years, the only update that seems anywhere close to what you're talking about was the OpenVPN update from last December. But that would only be an issue for people who relied on OpenVPN for access to their Arch box, and they should be extra-attentive to OpenVPN updates in the first place (seeing as they rely on it).
You don't have to like Arch, but acting like every "pacman -Syyu" is a gamble is just ridiculous.
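For anyone following along, spotting those leftover pacnew files takes one `find`. A minimal sketch, demoed on a throwaway temp dir so it's safe to paste; on a real Arch box you'd point it at /etc instead (or just run `pacdiff` from pacman-contrib):

```shell
# pacman drops "foo.pacnew" next to "foo" when it won't clobber an edited config.
# Demo on a temp dir; on a real box you'd search /etc.
demo=$(mktemp -d)
touch "$demo/pacman.conf" "$demo/pacman.conf.pacnew"
leftover=$(find "$demo" -name '*.pacnew' -o -name '*.pacsave')
echo "needs review: $leftover"
rm -r "$demo"
```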
The fact that mistyping a single character in a single command can break your entire install isn't exactly ideal, though.
I appreciate Arch's absolute no-hand-holding policies, but I don't mind just a little hand holding for the purpose of not having to spend half an hour or more fixing my install after I inevitably make a typo.
It's not necessarily a case of good or bad; it's just a difference in ideologies. Some people don't want to use Arch because they don't want to be exposed to an extra risk of breaking things; that's an adequate enough reason to avoid it.
[QUOTE=Larikang;52392341]That is highly debatable. It is extremely unlikely for a simple update to "break"; at worst it's going to run into an error and abort the transaction. Literally every single case I've seen of Arch "breaking" was due to user error: usually because they ignored an error and forced an upgrade anyway, ignored pacnew files (which indicate config file changes), did partial upgrades, etc.
Looking over announcements for the past few years, the only update that seems anywhere close to what you're talking about was the OpenVPN update from last December. But that would only be an issue for people who relied on OpenVPN for access to their Arch box, and they should be extra-attentive to OpenVPN updates in the first place (seeing as they rely on it).
You don't have to like Arch, but acting like every "pacman -Syyu" is a gamble is just ridiculous.[/QUOTE]
As I said, my experience is from years ago, but I swear that "pacman -Syyu" really was a gamble. I've had mkinitcpio broken, so on every kernel update it would fail to create the boot files. I've had a "pacman -Syyu" move everything from /lib to /usr/lib, breaking every god damn package on my system without a single warning. That's where I got yelled at to read the damn [url=https://www.archlinux.org/news/the-lib-directory-becomes-a-symlink/]blog[/url]. It wasn't Arch's fault for not warning users in the upgrade process, no, it was every user's fault for not reading the blog.
Of course Arch is perfect if you can blame all systems broken after updates on the users. Seriously, you wouldn't believe how often I was told that "[I]this update installed just fine on [u]my[/u] devices[/I], it must be something [u]you[/u] did".
Never did an update failure cause any "revert" of a transaction. Actually, I've never even heard of Arch reverting anything when an update fails. Does it do that nowadays? Because that would mean it's improved a lot.
[QUOTE=Lyokanthrope;52391279]That's what I adore about openSUSE Tumbleweed. Rolling release, but if something breaks horribly I can just boot into a btrfs snapshot taken before my [I]zypper dup[/I] and be good.[/QUOTE]
I wish the snapshot feature actually worked when I used it on Leap
[QUOTE=Dr. Evilcop;52391003]I switched to Manjaro for a similar reason; basically just Arch but updates get extra stability testing so things don't break as easily. Arch is also fun to install the first time but after the novelty of that wears off it's easier just to sit back and let the Manjaro installer do it.[/QUOTE]
What NixOS does there is [I]very[/I] different.
Every advanced Linux user should check it out; I can't promise that you'd like it (I don't personally use it either), but it's a very novel and interesting way of doing operating systems.
[QUOTE=DrTaxi;52403161]What NixOS does there is [I]very[/I] different.
Every advanced Linux user should check it out; I can't promise that you'd like it (I don't personally use it either), but it's a very novel and interesting way of doing operating systems.[/QUOTE]
Be my NixOS Facepunch buddy.
Also, NixOS aims to be as deterministic as possible. In my company, we have dozens of devices running NixOS. The definitions of these devices are in fully Turing-complete "Nix" files. Some of these files define services or software packages, others define hardware layout and drivers n shit, and still others define settings. All that stuff can be re-used, just like functions in a programming language can be re-used.
With a tool called "nixops" you can turn these configurations into reality. Press the "MAKE THIS CONFIG SO ON THAT DEVICE" button and it happens. It (re)starts all the services, it sets up the software properly, applies the settings and network configs and you're done.
I can't imagine writing my own install scripts and ssh'ing into a bunch of devices to get shit done anymore. Or deploying our own software by scp'ing it to a server and moving the files into place manually.
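To give a flavour of those Nix files: a stripped-down, hypothetical machine definition might look like this (the hostname is made up; `networking.hostName`, `services.openssh.enable` and friends are standard NixOS options):

```nix
{ config, pkgs, ... }:
{
  networking.hostName = "example-box";          # made-up name
  services.openssh.enable = true;               # declaratively turns on sshd
  environment.systemPackages = [ pkgs.htop ];   # packages baked into the system
}
```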
[QUOTE=FPtje;52404308]Be my NixOS Facepunch buddy.
Also, NixOS aims to be as deterministic as possible. In my company, we have dozens of devices running NixOS. The definitions of these devices are in fully Turing-complete "Nix" files. Some of these files define services or software packages, others define hardware layout and drivers n shit, and still others define settings. All that stuff can be re-used, just like functions in a programming language can be re-used.
With a tool called "nixops" you can turn these configurations into reality. Press the "MAKE THIS CONFIG SO ON THAT DEVICE" button and it happens. It (re)starts all the services, it sets up the software properly, applies the settings and network configs and you're done.
I can't imagine writing my own install scripts and ssh'ing into a bunch of devices to get shit done anymore. Or deploying our own software by scp'ing it to a server and moving the files into place manually.[/QUOTE]
NixOS is really cool. I used it for a while on my HTPC/server and it was awesome. The nixconfig is a wet dream come true.
The only downside is that it takes some time to get used to the nix language, which makes it less flexible to use (at first).
Also, the fact that there is no documentation (I think?) for how to set up packages (e.g. Samba or X) is annoying at first. Instead you have to read the package implementation code on GitHub.
Fedora has Kickstart which is pretty handy, and the syntax looks a bit easier compared to nixconfig. Doesn't let you perform anywhere near as much configuration as nixconfig though
Goddamn, Antergos is awesome.
Shoulda tried it sooner.
[editline]27th June 2017[/editline]
other than I keep trying to [i]apt install[/i].
[QUOTE=SataniX;52406077]other than I keep trying to [i]apt install[/i].[/QUOTE]
I kept trying to use yum when Fedora 22 was released
[QUOTE=Adam.GameDev;52406869]I kept trying to use yum when Fedora 22 was released[/QUOTE]
I still switch between so many CentOS servers and Fedora workstations that it gets me every time.
I applied for an apprenticeship where the company uses both CentOS and Debian so that's going to mess me up if I get it
Yeah that's annoying. Luckily at my old job I was the one spinning up all the servers, so I kept it constant. Job before that the only thing we ran debian on was unifi controllers so I didn't have to do much work on them past initial setup.
[editline]27th June 2017[/editline]
Since if you want to install it on CentOS, you have to manually install Mongo, which proceeds to go "ONLY A 16GB DRIVE, WHAT ARE YOU DOING?!?!"
I keep using yum on my Arch server and pacman on my CentOS server. If it weren't for the uptime stats I'd swap the other to Arch too.
[t]http://i.imgur.com/2bpw0Bo.jpg[/t]
Finally finished porting [URL="https://github.com/postmarketOS/pmbootstrap"]PostmarketOS[/URL] for my phone and got the basics running, pretty fucking neat. So now I have Wayland on my phone but am still running X on my desktop because Nvidia can't into drivers what is this?
[t]http://i.imgur.com/uWVd8U0.jpg[/t]
Turned off the dock, installed dmenu instead.
Can anyone tell me why I get slight screen tearing and slight lag in Ubuntu GNOME with these specs?
[t]http://i.imgur.com/rEivmFb.png[/t]
And maybe why Kubuntu doesn't allow me to use it at all. (Black screen but able to enter into a terminal.) I'm pretty sure I've tried everything Google has to offer D:
[QUOTE=gokiyono;52417741]Can anyone tell me why I get slight screen tearing and slight lag in Ubuntu GNOME with these specs?
[t]http://i.imgur.com/rEivmFb.png[/t]
And maybe why Kubuntu doesn't allow me to use it at all. (Black screen but able to enter into a terminal.) I'm pretty sure I've tried everything Google has to offer D:[/QUOTE]
Tearing is usually caused by broken v-sync, which in turn is caused by the GPU driver. Do you use open source (Nouveau) or closed Nvidia drivers? The closed Nvidia drivers usually cause tearing on KDE although Gnome works fine in my experience.
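If you want to confirm which driver is actually bound to the card, `lspci -k` tells you. The sketch below filters a canned sample line (made up for illustration, not real output) so the pipeline is clear; on your machine just run `lspci -k` and read the GPU entry:

```shell
# On a real box: lspci -k | grep -A3 -E 'VGA|3D'
# The sample text here is illustrative, not output from any actual machine.
sample='01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060]
	Kernel driver in use: nouveau'
driver=$(printf '%s\n' "$sample" | grep 'Kernel driver in use' | awk '{print $NF}')
echo "GPU driver: $driver"
```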
[QUOTE=drblah;52417777]Tearing is usually caused by broken v-sync, which in turn is caused by the GPU driver. Do you use open source (Nouveau) or closed Nvidia drivers? The closed Nvidia drivers usually cause tearing on KDE although Gnome works fine in my experience.[/QUOTE]
I've used the closed source ones. They worked perfectly with my 960 with KDE. But as soon as I tried with my 1060, it just black-screened.
I guess I should try the open source drivers
[editline]30th June 2017[/editline]
Thank god I don't have a printer to set up
Sooooo, enabling the nouveau drivers from this place
[t]https://i.stack.imgur.com/CV4K8.png[/t]
Causes the screen to go completely black with no way of entering the TTY
Why is your GPU being detected as a 9400 GT?
Maybe try [URL="https://askubuntu.com/questions/66328/how-do-i-install-the-latest-nvidia-drivers-from-the-run-file"]manually installing the correct proprietary drivers[/URL].
[QUOTE=Dr. Evilcop;52418788]Why is your GPU being detected as a 9400 GT?
Maybe try [URL="https://askubuntu.com/questions/66328/how-do-i-install-the-latest-nvidia-drivers-from-the-run-file"]manually installing the correct proprietary drivers[/URL].[/QUOTE]
It isn't, it's a picture I found online. But I might try manually installing them and see if that fixes it
[QUOTE=gokiyono;52420956]It isn't, it's a picture I found online. But I might try manually installing them and see if that fixes it[/QUOTE]
Give that a shot, yeah. IIRC Nouveau only very recently gained GTX 10xx support, so maybe Ubuntu doesn't have it integrated yet.
been trying to get some wireless USB adapters working for Zorin 12.
nabbed a TP-LINK Archer T2UH AC600, native linux driver doesn't install.
try to run 'make' in the directory and errors pop up.
will post logs later. I've been told that I need build-essential, but the computer I'm trying to get the thing working on doesn't have internet yet.
also, there's no .inf file for Zorin's "Windows Wireless Drivers" app to use. just a Setup.exe for the windows drivers.
Seems like it's using this chipset
[url]https://wiki.debian.org/rt2870sta[/url]
The vendor driver has been in the kernel's staging tree since 2.6.29, but you should be able to use it without any problems.
There's a decent chance that whoever maintains this random "Zorin" distribution (that I've never heard anyone actually use) hasn't included the binary firmware.
Debian distributes it under firmware-ralink, find out how your distribution distributes proprietary firmware for hardware.
[QUOTE=nikomo;52431079]Seems like it's using this chipset
[url]https://wiki.debian.org/rt2870sta[/url]
The vendor driver has been in the kernel's staging tree since 2.6.29, but you should be able to use it without any problems.
There's a decent chance that whoever maintains this random "Zorin" distribution (that I've never heard anyone actually use) hasn't included the binary firmware.
Debian distributes it under firmware-ralink, find out how your distribution distributes proprietary firmware for hardware.[/QUOTE]
there's honestly a good chance that I'll give up on Zorin altogether; despite its marketed user-friendliness, it's really not that great for out-of-the-box stuff even though it's based on Ubuntu.
I think I'm going to wipe it and just install Mint, which is what I use on my main computer anyway. I've had very few problems with that.
Dualboot Windows/Ubuntu on a 250 GB SSD, recommended? Or are there catches? I know it's a bit cramped, I was thinking of the following layout:
[CODE]
W10 (50GB) | root Ubuntu 16.04 (30GB) | home Linux (50GB) | Data partition (NTFS, 120GB)[/CODE]
W10 is mostly for only a few programs and Linux for day-to-day use.
Should I include swap? I know they're doing away with it in 18.04...
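Quick shell sanity check that the proposed split actually fills the drive:

```shell
# Sizes in GB from the layout above; they should total the 250 GB SSD.
w10=50; root=30; home=50; data=120
total=$((w10 + root + home + data))
echo "allocated ${total} GB of 250"
```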
[QUOTE=Number-41;52434057]Dualboot Windows/Ubuntu on a 250 GB SSD, recommended? Or are there catches? I know it's a bit cramped, I was thinking of the following layout:
[CODE]
W10 (50GB) | root Ubuntu 16.04 (30GB) | home Linux (50GB) | Data partition (NTFS, 120GB)[/CODE]
W10 is mostly for only a few programs and Linux for day-to-day use.
Should I include swap? I know they're doing away with it in 18.04...[/QUOTE]
There shouldn't be many catches, other than having to choose which OS to load at startup each time: you'll be replacing the Windows loader with GRUB, which lets you pick between them. Just make sure the hardware is compatible; being Linux, it should be 99% of the time, especially with Ubuntu. Are you installing on a laptop or a desktop?
In my installation, I put root on my SSD and then mounted home to a partition on my HDD, which has larger storage. I don't think this is the case with you though as you just listed the SSD.
Swap isn't much of a problem if you've got RAM to spare. If you've not got much, include a few GB or so. I included a couple of GB in spite of having 16GB, just to be sure.
I don't think Ubuntu root will need 30GB, you could probably cut it down to 20GB and get away with it, and then dedicate that extra 10 to the home if you wanted.
What's the other 120GB for? You could probably dedicate some of that to home if need be.
Is your system UEFI or BIOS? Remember to have the Ubuntu installer point the loader to where the Windows bootloader currently is, so you can swap between them. If it's UEFI, remember to boot the live USB in the correct mode so it can see the EFI partition where the bootloader lives. Otherwise, just look for a small partition (typically about 100 MB) with a name similar to 'loader', if it has one. Check on Google if you're not sure, or post the details here and we can probably have a look for you.
Good luck with it! Make sure the USB you're installing from has no bad sectors otherwise you'll get an input/output error when it's copying files over.
Yeah, it's for my brother's laptop. My idea is to have two OSes that share a partition (the NTFS data one) with e.g. music, movies, etc. Windows is icky with ext4, so I would format it as NTFS.
You could also just share your /home/ partition with W10 by formatting it as NTFS but I read that it's not optimal (as in, they advise against it). Hence the NTFS data partition that can be used by both.
[QUOTE=Number-41;52434198]Yeah, it's for my brother's laptop. My idea is to have two OSes that share a partition (the NTFS data one) with e.g. music, movies, etc. Windows is icky with ext4, so I would format it as NTFS.
You could also just share your /home/ partition with W10 by formatting it as NTFS but I read that it's not optimal (as in, they advise against it). Hence the NTFS data partition that can be used by both.[/QUOTE]
Yeah, sharing data between Linux and Windows is probably best done with NTFS. It is kind of the only FS with good support from both operating systems.
Also, setting /home/ to NTFS is probably not great since it is not case sensitive. It is likely to break something.
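For reference, a hypothetical /etc/fstab line for that shared NTFS data partition (the UUID is a placeholder; grab the real one with `blkid`, and install the ntfs-3g package if it isn't already there):

```
# /etc/fstab entry; uid/gid make the files owned by the first regular user
UUID=XXXX-XXXX  /mnt/data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0
```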