General Linux Chat and Small Questions v. Year of the Linux Desktop!
Considered Gentoo/Funtoo? I'd try them if I had time
[QUOTE=Adam.GameDev;48712490]Considered Gentoo/Funtoo? I'd try them if I had time[/QUOTE]
Eh, I tried Gentoo once. I just really like CentOS/Fedora tbh.
Using systemd timers instead of cron:
[img]http://i.imgur.com/eASeyjd.png[/img]
Here's what list-timers then looks like with an active timer (forgot to include in img):
[img]http://i.imgur.com/ZleC6HQ.png[/img]
And here's how to do it for yourself:
Go to ~/.config/systemd/user
NAME.timer:
[code]
[Unit]
Description=Timer to do whatever
[Timer]
OnBootSec=5min
; how often to activate (for specific days of the week etc., use OnCalendar= instead)
OnUnitActiveSec=1h
[Install]
WantedBy=timers.target
[/code]
NAME.service:
[code]
[Unit]
Description=Do whatever
[Service]
Type=oneshot
ExecStart=/cmd/to/execute
[/code]
(No [Install] section is needed in the service file; the timer activates it by name, so you only enable the timer itself.)
Then reload the daemon's files in user mode: systemctl --user daemon-reload
Enable the timer unit: systemctl --user enable NAME.timer
Start the timer: systemctl --user start NAME.timer
[editline]18th September 2015[/editline]
[QUOTE=~Kiwi~v2;48677283]It's Wheezy, last time I recall.[/QUOTE]
Didn't they update it to Jessie recently?
Yeah, right now they're on 8.1 Jessie, whereas I think SteamOS is on 7.9 Wheezy?
Last time I checked the Jessie version was still in beta
[QUOTE=Adam.GameDev;48714800]Last time I checked the Jessie version was still in beta[/QUOTE]
Jessie is listed as stable.
My Arch Linux install seems to have been broken by me installing multiple display managers. I installed GNOME while already having XFCE installed (I think; it was about 2 weeks ago and I haven't booted into Linux since). Now when I boot into Arch, it goes through all the bootstrapping stuff and eventually gets to:
[quote]Arch Linux 4.1.6-1-ARCH (tty1)
srobins-arch login:[/quote]
And just hangs there. The cursor blinks for about 3 seconds before disappearing, and then the system becomes unresponsive. The only thing that can rouse it from this screen is hitting the power button, at which point it kicks back to normal and shows the shutdown log. If I wait long enough, that log mentions that Xorg was blocked for however long, so it seems clear this has to do with my new display manager. I don't understand why Xorg is freezing, though, or what's really going on, or what I can do about it. Anyone have any ideas?
chroot in from a Linux live disc and disable your display managers
[QUOTE=lavacano;48716784]chroot in from a Linux live disc and disable your display managers[/QUOTE]
Thanks! Your advice helped me narrowly avoid ignorantly reinstalling Arch, haha. I chrooted in and disabled LightDM, which let me log in to a shell, but Xorg still crashed when I launched it. After using nvidia-xconfig to regenerate my xorg.conf, everything is working again.
[T]https://d.maxfile.ro/hqbxsccclc[/T]
Dear diary: my BSD machine apparently has "Defaults insults" by default in /etc/sudoers. That's going in every fresh GNU/Linux install now for the sake of it.
[editline]
$ sudo make me a sandwich
Password:
You do that again and see what happens...
[/editline]
In the end, I was doing that all wrong since it's `shutdown -p` and I'm not even in the wheel group or toggled for the targetpw default. [URL=http://xkcd.com/838/]The incident got reported[/URL].
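For anyone who wants the same treatment on a Linux box, the relevant sudoers fragment is a single line (always edit with visudo so a syntax error can't lock you out of sudo):

[code]
# /etc/sudoers -- snarky messages on a wrong password
Defaults insults
[/code]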
[QUOTE=srobins;48717096]Thanks! Your advice helped me narrowly avoid ignorantly reinstalling Arch, haha. I chrooted in and disabled LightDM, which let me log in to a shell, but Xorg still crashed when I launched it. After using nvidia-xconfig to regenerate my xorg.conf, everything is working again.[/QUOTE]
Even if you already fixed it, there's another solution available: while booting, edit Arch's kernel parameters by simply adding "1" at the end, which boots into single-user mode (basically it starts you off as root) and lets you administer broken systems.
So there are plans for a media server in my household, and I'm currently doing some trial runs in a VM to learn how I'm going to set it up. The idea right now is to boot off an SSD and have four data drives in RAID 0+1 using mdadm on some Debian-based distro (Xubuntu 14.04 in the case of the VM).
So I RAID0 (stripe) drives 1 and 2. Those check out: in testing I've successfully partitioned the array (using GPT with gdisk, since these are going to be 3TB drives), formatted it (ext4), and mounted it at /raid, where it stayed through reboots. But when I go through the same process with disks 3 and 4, and then try to RAID1 (mirror) the two arrays (md0 and md1 into md2), partition and format the newly created array, and mount it, it works but fails to reassemble on the next boot for some reason.
[B]Basic questions are: [/B]
Do I need to partition and format each raid0 array before creating the raid1 array?
I'm pretty sure that I'm supposed to ultimately be partitioning, formatting, and mounting the end product of the raid1 array but is this actually the case?
Is there a better filesystem for raid arrays that I should be using besides ext4?
I have saved the RAID config to mdadm.conf, so why does it seem not to stick, given that the array fails to reassemble on reboot?
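For reference, a sketch of the mdadm workflow being described (device names are hypothetical). You partition and format only the top-level array, not the underlying stripes, and on Debian-based systems the arrays are assembled at boot from the initramfs, so regenerating it after saving mdadm.conf is the step most often missed:

[code]
# create two stripes, then mirror them (RAID 0+1); sdb..sde are hypothetical
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
# partition/format only md2, then record the layout and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
[/code]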
[QUOTE=supervoltage;48723336]Even if you already fixed it, there's another solution available: while booting, edit Arch's kernel parameters by simply adding "1" at the end, which boots into single-user mode (basically it starts you off as root) and lets you administer broken systems.[/QUOTE]
How do I edit the kernel parameters?
Also, I came in here to ask a question about Linux and distribution of software: Why is the trend in Linux to download the source of a program and compile it yourself, rather than just downloading a pre-compiled portable executable from the author? I'm trying to wrap my head around the reasoning for this, I don't really understand why it's so common to not have any pre-compiled executables available for certain programs or services, but rather to have the source code and a makefile for you to compile it yourself. Is there something unique about Linux's use of linking and libraries that dictates this behavior?
[QUOTE=srobins;48728690]Also, I came in here to ask a question about Linux and distribution of software: Why is the trend in Linux to download the source of a program and compile it yourself, rather than just downloading a pre-compiled portable executable from the author? I'm trying to wrap my head around the reasoning for this, I don't really understand why it's so common to not have any pre-compiled executables available for certain programs or services, but rather to have the source code and a makefile for you to compile it yourself. Is there something unique about Linux's use of linking and libraries that dictates this behavior?[/QUOTE]
This really varies by distribution. Most distros worth using will offer packages of pre-compiled binaries and their supporting files for installation; just not everything will have a package. And if you need headers to compile something, you can easily grab the development libraries using that same package manager.
[QUOTE=srobins;48728690]How do I edit the kernel parameters?
Also, I came in here to ask a question about Linux and distribution of software: Why is the trend in Linux to download the source of a program and compile it yourself, rather than just downloading a pre-compiled portable executable from the author? I'm trying to wrap my head around the reasoning for this, I don't really understand why it's so common to not have any pre-compiled executables available for certain programs or services, but rather to have the source code and a makefile for you to compile it yourself. Is there something unique about Linux's use of linking and libraries that dictates this behavior?[/QUOTE]
The kernel parameters can be edited in GRUB, Syslinux or any other bootloader you might be using. For example, in GRUB, when the computer starts up it asks you to choose from a list of operating systems or a different kernel. What you do is you highlight the one you wish to boot and then press 'e' to edit that entry. Then you move down to the "linux" command, which is followed by the path to a kernel image and some parameters. At the very end of this line you write a "1"; it's a parameter. Hit Ctrl-x to boot into single user mode.
Another solution, from an already running system, is to edit GRUB's default configuration file in /etc/default. Within the first few lines you'll find the kernel parameters (GRUB_CMDLINE_LINUX_DEFAULT), so just add a 1 there. After saving, execute "grub-mkconfig -o /boot/grub/grub.cfg" to make the change permanent.
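Concretely, the line being edited in /etc/default/grub looks something like this (the existing flags vary per install; the "1" is the addition):

[code]
GRUB_CMDLINE_LINUX_DEFAULT="quiet 1"
[/code]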
Sadly, I haven't dabbled with EFI since my motherboard doesn't support it, so I can't give you any information regarding EFI bootloaders. Google is any Linux user's best friend.
Now for the source compilation of programs: programs compiled from source on your machine can work slightly better and faster. Every CPU is somewhat different, and compilation can tailor the program exactly to your CPU.
[QUOTE=srobins;48728690]Why is the trend in Linux to download the source of a program and compile it yourself, rather than just downloading a pre-compiled portable executable from the author?[/QUOTE]
There are a lot of different reasons for this.
1. A lot of "Linux programs" are actually written for any Unix-like OS, and can be built for a variety of platforms other than Linux e.g. BSD, Solaris, OS X, even Windows (with MinGW). With so many target platforms it's impractical for the developer to also provide binaries. It's easier to let the users of each platform build it.
2. Even within one OS family, there are many differing distributions that add/remove different features. A good build script will test for the presence of certain features before compiling so that it can integrate better with the OS. One example off the top of my head is Vim, which has scripting APIs for Python and Ruby but support needs to be compiled in. If you want to distribute Vim as a binary, you need to make the call for all of the users whether it should support those languages (and thus depend on them when installing) or whether it should leave the support out. Compiling yourself lets you choose for yourself.
3. It is possible to compile a program on one computer and run it on another computer only because there are standard instruction sets that many CPUs share e.g. x86, ARM. However many processors also have special instructions that allow certain operations to be done more efficiently. The problem is that binaries that make use of these special instructions will be less portable since few computers will understand them. So when binaries are distributed, they usually leave the special instructions out and only use the standard common ones that most CPUs know. So they sacrifice efficiency for portability. Compiling from source lets you enable all of the features specific to your computer.
4. When you build a program, you get to choose whether it's linked statically or dynamically. Static linking integrates all of the code needed to run the program into one big binary so that you can always run that program on any computer (with the correct architecture), so it's more portable and is less likely to break with software upgrades. Static programs are larger since they need to include all of their library dependencies. If you have many statically linked programs all with the same dependency, you get a copy of that dependency for every single program. This wastes space and makes upgrades more painful, since if the dependency gets a minor patch (like a bugfix), every single program that depends on it also needs to be upgraded, since they each have their own copy. Dynamic linking lets every program share one copy of the library. This makes each program smaller and easier to upgrade, but the downside is that for major patches (where the library API changes) you either need to upgrade every program that depends on the library or have a system for determining which version of the library each program uses and managing them concurrently. Again, this is a choice that needs to be made at compile time.
AFAIK most Linux distributions distribute dynamically-linked x86 binaries. But I recall Torvalds saying in an interview that he wished more Linux programs were distributed statically linked so things would "just work" more and you could spend less time worrying about broken packages.
[QUOTE=TrafficMan;48723423]So there are plans for a media server in my household, and I'm currently doing some trial runs in a VM to learn how I'm going to set it up. The idea right now is to boot off an SSD and have four data drives in RAID 0+1 using mdadm on some Debian-based distro (Xubuntu 14.04 in the case of the VM).
So I RAID0 (stripe) drives 1 and 2. Those check out: in testing I've successfully partitioned the array (using GPT with gdisk, since these are going to be 3TB drives), formatted it (ext4), and mounted it at /raid, where it stayed through reboots. But when I go through the same process with disks 3 and 4, and then try to RAID1 (mirror) the two arrays (md0 and md1 into md2), partition and format the newly created array, and mount it, it works but fails to reassemble on the next boot for some reason.
[B]Basic questions are: [/B]
Do I need to partition and format each raid0 array before creating the raid1 array?
I'm pretty sure that I'm supposed to ultimately be partitioning, formatting, and mounting the end product of the raid1 array but is this actually the case?
Is there a better filesystem for raid arrays that I should be using besides ext4?
I have saved the RAID config to mdadm.conf, so why does it seem not to stick, given that the array fails to reassemble on reboot?[/QUOTE]
You can use btrfs's native RAID support.
mkfs.btrfs has parameters for it, and it's really easy, with no need for mdadm.
I'm using it right now with two SSDs in RAID0.
[code]mkfs.btrfs -d raid0 /dev/sdb /dev/sdc[/code]
[url]https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices[/url]
It doesn't create a new device like mdadm does; it spans the specified devices directly.
E.g. if both drives are 120GB, mounting either /dev/sdb or /dev/sdc gives you a single 240GB filesystem.
So it's really easy, and there's no need to reconfigure it unless you format those drives.
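For mounting it at boot, any member device works in fstab, though the filesystem UUID is safer; a sketch with a hypothetical UUID and mount point:

[code]
# /etc/fstab -- mounting a multi-device btrfs filesystem
UUID=0b1c2d3e-...  /mnt/storage  btrfs  defaults  0  0
[/code]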
[QUOTE=srobins;48728690]How do I edit the kernel parameters?
Also, I came in here to ask a question about Linux and distribution of software: Why is the trend in Linux to download the source of a program and compile it yourself, rather than just downloading a pre-compiled portable executable from the author?[/QUOTE]
That's not at all how software is distributed.
Most people download software as packages from the developers of the distribution they're using. These people build their packages against the source from the author (and in some cases, the author of the software actually maintains the package for some distributions).
[QUOTE=Larikang;48731393]There are a lot of different reasons for this.
1. A lot of "Linux programs" are actually written for any Unix-like OS, and can be built for a variety of platforms other than Linux e.g. BSD, Solaris, OS X, even Windows (with MinGW). With so many target platforms it's impractical for the developer to also provide binaries. It's easier to let the users of each platform build it.
2. Even within one OS family, there are many differing distributions that add/remove different features. A good build script will test for the presence of certain features before compiling so that it can integrate better with the OS. One example off the top of my head is Vim, which has scripting APIs for Python and Ruby but support needs to be compiled in. If you want to distribute Vim as a binary, you need to make the call for all of the users whether it should support those languages (and thus depend on them when installing) or whether it should leave the support out. Compiling yourself lets you choose for yourself.
3. It is possible to compile a program on one computer and run it on another computer only because there are standard instruction sets that many CPUs share e.g. x86, ARM. However many processors also have special instructions that allow certain operations to be done more efficiently. The problem is that binaries that make use of these special instructions will be less portable since few computers will understand them. So when binaries are distributed, they usually leave the special instructions out and only use the standard common ones that most CPUs know. So they sacrifice efficiency for portability. Compiling from source lets you enable all of the features specific to your computer.
4. When you build a program, you get to choose whether it's linked statically or dynamically. Static linking integrates all of the code needed to run the program into one big binary so that you can always run that program on any computer (with the correct architecture), so it's more portable and is less likely to break with software upgrades. Static programs are larger since they need to include all of their library dependencies. If you have many statically linked programs all with the same dependency, you get a copy of that dependency for every single program. This wastes space and makes upgrades more painful, since if the dependency gets a minor patch (like a bugfix), every single program that depends on it also needs to be upgraded, since they each have their own copy. Dynamic linking lets every program share one copy of the library. This makes each program smaller and easier to upgrade, but the downside is that for major patches (where the library API changes) you either need to upgrade every program that depends on the library or have a system for determining which version of the library each program uses and managing them concurrently. Again, this is a choice that needs to be made at compile time.
AFAIK most Linux distributions distribute dynamically-linked x86 binaries. But I recall Torvalds saying in an interview that he wished more Linux programs were distributed statically linked so things would "just work" more and you could spend less time worrying about broken packages.[/QUOTE]
Thanks a ton for this post, super informative, ticked all the boxes of info I wanted to get and was really well written. Thanks a lot man.
[editline]22nd September 2015[/editline]
[QUOTE=supervoltage;48729814]The kernel parameters can be edited in GRUB, Syslinux or any other bootloader you might be using. For example, in GRUB, when the computer starts up it asks you to choose from a list of operating systems or a different kernel. What you do is you highlight the one you wish to boot and then press 'e' to edit that entry. Then you move down to the "linux" command, which is followed by the path to a kernel image and some parameters. At the very end of this line you write a "1"; it's a parameter. Hit Ctrl-x to boot into single user mode.
Another solution, from an already running system, is to edit GRUB's default configuration file in /etc/default. Within the first few lines you'll find the kernel parameters (GRUB_CMDLINE_LINUX_DEFAULT), so just add a 1 there. After saving, execute "grub-mkconfig -o /boot/grub/grub.cfg" to make the change permanent.
Sadly, I haven't dabbled with EFI since my motherboard doesn't support it, so I can't give you any information regarding EFI bootloaders. Google is any Linux user's best friend.
Now for the source compilation of programs: programs compiled from source on your machine can work slightly better and faster. Every CPU is somewhat different, and compilation can tailor the program exactly to your CPU.[/QUOTE]
Ahh okay I gotcha, thanks!
[QUOTE=srobins;48728690]How do I edit the kernel parameters?
Also, I came in here to ask a question about Linux and distribution of software: Why is the trend in Linux to download the source of a program and compile it yourself, rather than just downloading a pre-compiled portable executable from the author? I'm trying to wrap my head around the reasoning for this, I don't really understand why it's so common to not have any pre-compiled executables available for certain programs or services, but rather to have the source code and a makefile for you to compile it yourself. Is there something unique about Linux's use of linking and libraries that dictates this behavior?[/QUOTE]
It is the only way to verify that it does what it says it does.
Also, it is super useful for the collaborative community: being able to update and improve each other's code, fork it, or just learn from it.
not really linux related but not sure where else to post: Why the fuck is msoffice so laggy?
[media]https://www.youtube.com/watch?v=ugcDn63WviA[/media]
[QUOTE=Mega1mpact;48744366]not really linux related but not sure where else to post: Why the fuck is msoffice so laggy?[/QUOTE]
Really hard to know; it could be Win10, Office, or graphics drivers.
I think this is their attempt to stop screen tearing by compositing each frame to vblank (OS X has done this for years and years).
Been trying out ncmpcpp and mopidy; really liking it after messing around with it for a few minutes. Though I'm having an issue with the visualizer: I can't get it to work. It just says this when I start it:
[IMG]http://pred.me/pics/1443203760.png[/IMG]
mpd.conf
[code]audio_output {
type "fifo"
name "my_fifo"
path "/tmp/mpd.fifo"
format "44100:16:2"
}
[/code]
.ncmpcpp/config
[code]visualizer_fifo_path = "/tmp/mpd.fifo"
visualizer_output_name = "my_fifo"
visualizer_sync_interval = "1"
visualizer_in_stereo = "yes"
visualizer_type = "spectrum"[/code]
Am I missing something? What's wrong? I used what was on the Arch wiki.
[editline]25th September 2015[/editline]
Apparently I've been following advice for MPD, not mopidy, which kind of explains why stuff isn't working. I can't seem to find anything regarding a visualizer for mopidy though; anyone know how?
Reinstalled Arch; for some reason only GNOME on Wayland seems to run. When I start GNOME in X11 mode it only shows my cursor. Any clue how I can debug this?
[QUOTE=Mega1mpact;48761236]Reinstalled Arch; for some reason only GNOME on Wayland seems to run. When I start GNOME in X11 mode it only shows my cursor. Any clue how I can debug this?[/QUOTE]
Knowing what xorg packages you installed would help.
[QUOTE=thatbooisaspy;48761364]Knowing what xorg packages you installed would help.[/QUOTE]
I just installed the gnome group using `pacman -S gnome`
Xorg log: [url]https://nnmm.nl/?FO3[/url] (running on TTY3, with gnome-session already running on TTY2 under Wayland and GDM on TTY1)
[editline]25th September 2015[/editline]
new log: [url]https://nnmm.nl/?kHi[/url]
[editline]25th September 2015[/editline]
fixed it :v:
yaourt -S xf86-video-{fbdev,vesa,ati}
Any idea what might cause system-wide freezes every 15 seconds, and every time I open a new tab, load a new web page, or open a program? The only things that keep working when this happens are sound and my mouse: I can move the cursor but can't interact with anything, and typing gets queued up so that when it unfreezes it all appears at once. The freezes last 5-10 seconds each. I read that it could be the lack of swap, but even after creating a 2GB swap file it still happens, and I've gone without swap for ages with no issues. Usually my system had to be on for about a whole day before this started happening (and even then it was rare), but now a reboot isn't enough to get the system back on its feet; I've rebooted twice and the freezes persist. Any idea what's going on?
running Arch 64bit on 4.1.6-1
[editline]26th September 2015[/editline]
Seems like it has something to do with my internet connection. When plugged into my 4G router, the freezes are very frequent; on my home router they're practically gone, and when unplugged from everything they're also gone. Could a fluctuating internet connection cause freezes like I'm describing, and how would I go about fixing it?
[editline]26th September 2015[/editline]
Very odd: disabling IPv6 on my ethernet interface fixed the problem for me. Why is that?
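If anyone wants to make that stick across reboots, there's a per-interface sysctl for it (the interface name here is hypothetical; substitute your own):

[code]
# /etc/sysctl.d/40-ipv6.conf -- disable IPv6 on one interface only
net.ipv6.conf.eth0.disable_ipv6 = 1
[/code]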
I've decided I finally really should learn systemd. Redoing my Fedora 21 install with 22...
Does anyone know why this is happening to the fonts in posts? (this only happens on FP).
I have infinality installed, but other sites are fine.
[t]http://josm.uk/i/damnfontrendering.png[/t]