General Linux Chat and Small Questions v. I broke my Arch Install
6,886 replies
Cool old term is also fun
[media]http://www.youtube.com/watch?v=5AHNMh-vF-M[/media]
[QUOTE=Adam.GameDev;46416779]And it's still horrible[/QUOTE]
It does what it wants to do well, or can you name anything objectively bad about it, rather than just subjective disagreements with its design philosophy?
[QUOTE=Chezhead;46417927]In case you haven't already, install the "oneko" package (sudo apt-get install oneko) and run it. Super fun, especially during lonely nights of programming alone in my dorm.[/QUOTE]
holy fuck I missed this little cat
I remember having an app on Windows that had this.
Can anyone recommend a guide to setting up the ubuntu desktop to be visible on my network and possibly act as a small home server for myself? One that I can remote into and use from work and stuff?
I'm wanting to set up a VPN on this, at some point get a dynamic dns provider, and remote into my machine from work so I can transfer documents/learn how remote access works.
All the while, I'd still use this as my main Linux machine, with Steam and other games on it. As well as my PS2 emulator c: Basically, keep the UI but also have it double as a server.
I've read security is lowered by keeping the desktop environment; can anyone also shed some light on what that means? Is it just because having the UI would be easier to do malicious things in if someone gained access to my comp? Or because of faults in the interface that could be taken advantage of?
[QUOTE=DrTaxi;46421067]It does what it wants to do well, or can you name anything objectively bad about it, rather than just subjective disagreements with its design philosophy?[/QUOTE]
I guess my problems were more to do with the standard applications being inconsistently designed and lacking a lot of functionality compared to the alternatives. Then again, maybe I am being harsh, it was made to be simple
[QUOTE=Adam.GameDev;46422274]I guess my problems were more to do with the standard applications being inconsistently designed[/quote]
Not really following you there...
[quote]and lacking a lot of functionality compared to the alternatives.[/QUOTE]
Which is intentional.
I don't like its design philosophy either, but that's personal preference.
[QUOTE=Adam.GameDev;46422274]I guess my problems were more to do with the standard applications being inconsistently designed and lacking a lot of functionality compared to the alternatives. Then again, maybe I am being harsh, it was made to be simple[/QUOTE]
Inconsistent GUI in Linux drives me crazy. I'm lucky that all my apps work well with the basic theme I'm using.
Also, I did a Debian minimal install entirely with aptitude today and didn't get any of the weird bugs I got with apt-get! I'll make it my primary laptop distro, but I still don't know why I'm using less memory on Debian than on Arch.
[QUOTE=Unreliable;46422006]Can anyone recommend a guide to setting up the ubuntu desktop to be visible on my network and possibly act as a small home server for myself? One that I can remote into and use from work and stuff?
I'm wanting to set up a VPN on this, at some point get a dynamic dns provider, and remote into my machine from work so I can transfer documents/learn how remote access works.
All the while, I'd still use this as my main Linux machine, with Steam and other games on it. As well as my PS2 emulator c: Basically, keep the UI but also have it double as a server.
I've read security is lowered by keeping the desktop environment; can anyone also shed some light on what that means? Is it just because having the UI would be easier to do malicious things in if someone gained access to my comp? Or because of faults in the interface that could be taken advantage of?[/QUOTE]
Well, there are multiple ways to do this, each with its own advantages. If you want to connect to and control the machine via VPN, the easiest would be X forwarding over SSH. You could think of X forwarding over SSH as an all-in-one solution, where SSH provides the VPN, encryption and remote desktop/remote applications in one. One problem with X forwarding is that, as far as I know, it does not work entirely flawlessly on Windows (if that's an issue for you).
Another solution is VNC, which is the most widely adopted option and has clients available for more or less every device in existence. There are plenty of other solutions beyond the two I mentioned, but those are the most popular.
For your security question: yes, each additional piece of software adds code which might contain some kind of security hole. This may be even more relevant for desktop software, where security review is not as thorough as with most server software. Every piece of software you add degrades security somewhat; it's all about the risk you are willing to take. That said, I would not worry about it too much. If you also have a firewall on the server and/or on your router, the risk is lowered significantly.
[editline]6th November 2014[/editline]
I see you mentioned file transfer now, and I would recommend using SSH for that as well. If you are using Linux on the other end, you could use SSH as a solution for all your remoting:
SSHFS to mount your folders on the server locally on your client.
X-Forwarding to run remote GUI applications or your entire desktop environment.
SSH itself to run remote commands on the server.
You can also route sound, and if I'm not mistaken you can even forward devices, e.g. USB, over SSH.
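If it helps, here's a rough sketch of those three uses; the hostname and paths are placeholders for your own setup:
[code]# Mount a folder from the server locally over SSH (needs the sshfs package)
mkdir -p ~/server-docs
sshfs user@myserver.example.com:/home/user/docs ~/server-docs

# Run a single GUI application remotely, displayed on your local screen
ssh -X user@myserver.example.com firefox

# Run a one-off command on the server
ssh user@myserver.example.com 'df -h'

# Unmount the SSHFS folder when done
fusermount -u ~/server-docs[/code]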
CUPS and GhostScript might be cool, but they are soooo not meant for performance.
48-page duplex print is killing my low-end laptop, I haven't seen a single page come out of the printer.
I let the print job run for 4 hours, gave up and printed it from my Android tablet, took 15 seconds.
[QUOTE=nikomo;46438554]I let the print job run for 4 hours, gave up and printed it from my Android tablet, took 15 seconds.[/QUOTE]
Something surely went wrong; neither CUPS nor GhostScript should be that slow or performance-intensive. On my netbook I can send a print job of 300 pages without it breaking a sweat, so it shouldn't be a problem for you either. Are you using Gentoo/Funtoo or another source-based distro?
Debian. I might have another look at it later.
Hey guys, I came here with multiple questions regarding an issue I'm facing.
I've recently acquired a Raspberry Pi B+ and I installed Arch Linux ARM on it, but I've noticed it's quite slow. So, lately I've been looking into cross-compiling the linux-ck kernel for the Pi. However, I'm not quite sure how I should go about the issue.
For one, I couldn't find a repository for a pre-compiled kernel for the armv6l/armv6kz architectures. Next, I couldn't find a ready-to-compile package for those architectures. So, what's left is to get the standard kernel for the Pi and patch it with the -ck1 patches (or -ck2, depending on which is more performant).
Secondly, I've never done any kind of kernel patching, so all this is quite difficult for me. I would like it if I had a helping hand from you guys, seeing as most of you have more experience in kernel compilations than I do.
In conclusion, my request is as follows: please give me instructions on how to cross-compile a standard RPi kernel with the -ck patchsets on my machine, and please give me instructions as to how to implement it on the RPi. Thank you very much!
kernel patching is as easy as patching any other damn thing
[code]cd /usr/src/linux
patch -p1 < your_patch_file[/code]
I'm not so sure about cross compiling, since every time I've cross compiled the toolchain was set up for me and I just had to type "make". If you've already compiled the standard RPi kernel yourself, then the instructions are most likely the same as they were before you started patching things.
There's actually a [url=http://elinux.org/Raspberry_Pi_Kernel_Compilation#2._Cross_compiling_from_Linux]sort-of guide on this[/url] which even links to a cross-compiler.
As long as you can get the .config for the kernel, just patch the source and make with the desired target architecture.
[editline]8th November 2014[/editline]
Moreover, when you want to update the kernel, just grab the sources again, patch them, copy over the .config, and run make oldconfig. That last step updates the .config interactively in case new settings have been introduced. After that, just make again and you'll have your new kernel.
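A sketch of that update cycle in command form; the patch version, paths and cross-compiler prefix here are just examples for the Pi case, adjust them to whatever you actually downloaded:
[code]cd /usr/src/linux                     # freshly unpacked kernel sources
patch -p1 < ../patch-3.12-ck2         # re-apply the -ck patchset
cp ../old-kernel/.config .            # reuse your previous configuration
make oldconfig                        # interactively resolve any new options
make -j4 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-   # cross-build for ARM[/code]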
so is having to log out and reload the kernel module after every update exclusive to the proprietary nvidia drivers?
because i swear this should be automated and seamless by now, but if it doesn't happen with open-source drivers it would explain why the community hasn't done anything about it
[QUOTE=lavacano;46444487]so is having to log out and reload the kernel module after every update exclusive to the proprietary nvidia drivers?
because i swear this should be automated and seamless by now, but if it doesn't happen with open-source drivers it would explain why the community hasn't done anything about it[/QUOTE]
reloading the kernel module is not something i've had to do when updating my radeon, but logging out and back in is pretty standard. much better than restarting though.
[editline]9th November 2014[/editline]
oh wait, that's because there's no kernel module i'm loading
Need help with an init.d script
[code]#!/bin/sh
# This is for the file /etc/init.d/dnscrypt
### BEGIN INIT INFO
# Provides: dnscrypt
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: DNSCrypt for OpenDNS
# Description: Launch the dnscrypt to communicate with OpenDNS
### END INIT INFO
DAEMON="/usr/sbin/dnscrypt-proxy"
NAME="dnscrypt"
dnscrypt_start()
{
echo "Starting dnscrypt"
dnscrypt-proxy --local-address=127.0.0.2:2053 --daemonize
}
dnscrypt_stop()
{
echo "Stopping dnscrypt"
start-stop-daemon --oknodo --stop --quiet --retry=0/3/KILL/3 --exec "$DAEMON" > /dev/null
}
case "$1" in
start)
dnscrypt_start
;;
stop)
dnscrypt_stop
;;
restart|force-reload)
dnscrypt_stop
dnscrypt_start
;;
*)
echo "Usage: /etc/init.d/$NAME {start|stop|restart|force-reload}" >&2
exit 1
;;
esac
exit 0
[/code]
Then I do this:
[code]
chmod +x /etc/init.d/dnscrypt
update-rc.d dnscrypt defaults[/code]
If I do sudo service dnscrypt start, the service starts normally. But it doesn't start up at boot
[QUOTE=lavacano;46444487]so is having to log out and reload the kernel module after every update exclusive to the proprietary nvidia drivers?
because i swear this should be automated and seamless by now, but if it doesn't happen with open-source drivers it would explain why the community hasn't done anything about it[/QUOTE]
You can't unload the module while it's in use, so yes, you've got to kill X first.
[editline]9th November 2014[/editline]
[QUOTE=Abaddon-ext4;46446148][code]
chmod +x /etc/init.d/dnscrypt
update-rc.d dnscrypt defaults[/code]
If I do sudo service dnscrypt start, the service starts normally. But it doesn't start up at boot[/QUOTE]
Is that supposed to be "defaults", or should it be "default"?
It doesn't work if I use update-rc.d dnscrypt default
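Might be worth checking whether update-rc.d actually created the runlevel symlinks; something like this should show them (the runlevel layout may differ on your system):
[code]# Look for S/K links pointing at the script in each runlevel
ls -l /etc/rc?.d/ | grep dnscrypt

# If nothing shows up, remove and re-add the links
sudo update-rc.d -f dnscrypt remove
sudo update-rc.d dnscrypt defaults[/code]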
I love incremental backups/CoW <3
[code]% du -hd1 /mnt/storage
1.5T /mnt/storage/backup
11G /mnt/storage/hdd
21G /mnt/storage/VBox
15G /mnt/storage/steam
[B]1.6T[/B] .
% df -h /mnt/storage/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 931G [B]197G[/B] 993G 17% /mnt/storage[/code]
I'm a total Linux noob
I just finished installing Ubuntu on a spare hard drive but I can't seem to get it to mount my Intel Matrix RAID 0. I'm a little confused because some say it should auto mount and others say you need to configure it with dmraid
[editline]10th November 2014[/editline]
Eh, sounds like it doesn't support my Intel RAID chipset, guess I'll have to make do without
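Before giving up on it, might be worth trying dmraid by hand. A sketch (the isw_* name is just how Intel fakeRAID sets usually show up under /dev/mapper; yours will differ, as may the partition suffix):
[code]sudo apt-get install dmraid
sudo dmraid -ay                 # activate all detected RAID sets
ls /dev/mapper/                 # the array should appear, e.g. isw_xxxxxxxx_Volume0
sudo mount /dev/mapper/isw_xxxxxxxx_Volume01 /mnt[/code]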
any idea why ssh might be timing out to my VPS? I know it works on my home connection, yet it doesn't where I currently am. obviously it must be related to the network here, but I am connected to my VPN so I wouldn't think that would be an issue?
Try putting this in your client-side .ssh/config: ServerAliveInterval 30
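For reference, a full keepalive stanza in ~/.ssh/config looks something like this (the host alias and address are placeholders); ServerAliveCountMax controls how many missed probes are tolerated before the client gives up:
[code]Host myvps
    HostName vps.example.com
    ServerAliveInterval 30
    ServerAliveCountMax 4[/code]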
So I have a funky problem: I'm working on an assignment for university which requires us to ssh into a special vm. This vm, for whatever reason, does not detect my terminal size (or any resizing at all) and always defaults to 24x80. I *can* manually set the terminal size to whatever I want by sticking an export into my bashrc, but that's not very helpful as you can imagine.
After some digging I found out about SIGWINCH, a signal that's supposed to be sent to the foreground process on every terminal resize. I can trap it and see that it works just fine on my local machine with [code]trap -- 'echo changed size' SIGWINCH[/code] but doing the same on the vm produces no results.
Anyone have any idea about WTF is going on here or how I can fix this? Googling results in virtually nothing useful.
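One way to take the terminal emulator out of the equation entirely is to deliver the signal to the shell yourself; if the trap fires this way but not on a real resize, the problem is in the ssh/terminal layer rather than in bash:
[code]trap 'echo got SIGWINCH' SIGWINCH
kill -SIGWINCH $$[/code]
If that prints, bash is handling the signal fine and it's the resize event that never reaches the vm.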
[code]shopt -s checkwinsize[/code]
did you try putting that in ~/.bashrc
[editline]13th November 2014[/editline]
also check the system configs (/etc/profile, /etc/bash/bashrc, any files those two source) and see if SIGWINCH is getting trapped there for some stupid reason
[QUOTE=lavacano;46478360][code]shopt -s checkwinsize[/code]
did you try putting that in ~/.bashrc
[editline]13th November 2014[/editline]
also check the system configs (/etc/profile, /etc/bash/bashrc, any files those two source) and see if SIGWINCH is getting trapped there for some stupid reason[/QUOTE]
Yes that's already in my bashrc. I'll check around and see if I can find a trap.
[editline]13th November 2014[/editline]
Nope, it doesn't look like anything's trapping it. There aren't even any processes running in the background (the only ones that show up are bash and top).
I honestly have no idea, and since it looks like this isn't on my end, I'll go talk to the instructor to see if he can shed some light into this issue.
If anyone has any other ideas still, let me know.
anyone know how to fix screen tearing in firefox when scrolling?
install drivers
[QUOTE=Mega1mpact;46481530]install drivers[/QUOTE]
I've installed the latest nvidia driver.
[editline]edit[/editline]
installing compton seems to have fixed it.