• 2 Posts
Joined 4M ago
Cake day: Aug 04, 2023


I just meant with the C: comment that OP shouldn’t expect it to still be called C: after he’s wiped Windows and is running Linux.

As for the oversimplification comment:

First off, C: (or D:, E: etc) doesn’t refer to a disk in Windows. It refers to a partition. So it’s entirely possible (and not terribly uncommon) to have a single disk with both a C: and a D: on it.

It’s very typical for a Linux installation process to (by default, if you don’t tell it to do something else) make separate partitions on a single disk for / and /home. (Plus there’s usually an extra EFI boot partition on most modern desktop/laptop systems. And a swap partition.) In such a case, you couldn’t really describe where “the disk” (that was formerly called C: on Windows) was mounted in the mindset of conflating “partition” with “disk”. What was previously “the disk” C: (again, C: isn’t a disk, it’s a partition, but Windows makes it easy to conflate the two) is now split in two (or three or more) and mounted not just on / but also on /home (and maybe on /boot as well, and maybe one partition isn’t mounted on the main abstract root filesystem).

/ and /home aren’t really even partitions (let alone disks). They’re mount points in the slightly more abstract root filesystem.

The most obvious software representation on a typical Linux system of the main internal disk in that machine would probably be something like /dev/sda or /dev/nvme0n1. The partitions would likely be something like /dev/sda1, /dev/sda2, etc. or /dev/nvme0n1p1, /dev/nvme0n1p2, etc. Also, the “filesystem” on the partition is arguably a subtly distinct concept from the block device that is the partition. And where that filesystem is mounted is yet another distinct concept. (Another subtlety I haven’t mentioned is the distinction between the device in the /dev/ directory/filesystem and the kernel representation of the device with the device major/minor numbers.)
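If you want to see those distinct layers concretely, the kernel exposes its view of block devices and of mounted filesystems under /proc on any Linux system (device names will of course vary by machine):

```shell
# Block devices the kernel knows about, with their major/minor numbers
# (partitions show up here alongside the whole disks they live on):
cat /proc/partitions

# And where filesystems are currently mounted (device -> mount point);
# the root filesystem is the line whose mount point is "/":
grep ' / ' /proc/mounts
```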

A typical Windows install kindof conflates a lot of these probably a lot more so than Linux does. But I didn’t want to be like “akshuly things are a lot more complicated than that and you have to understand a bunch of Linux kernel internals to understand all the ways in which you’re wrong so you can install the holy ‘Guh-noo Plus Linux’.” All that is stuff that OP will learn by installing and using Linux. And if OP’s going with Mint, it’s probably not necessary to really understand all of that before starting the install process.

And technically OP doesn’t really need to understand that the main disk won’t be called C: after switching to Linux. Probably. (I don’t think I’ve ever installed Mint. So I don’t know for sure, but from what I’ve heard about it, I’d be surprised if the installation process had much of a learning curve.) But I told OP anyway. So there. :D

Hey! Great questions.

It seems like what you’re asking about are more what I’d think of as components of a Linux “system” or “install.”

First off, it’s definitely worth saying that there aren’t a lot of rules that would apply to “all” Linux systems. Linux is huge in embedded systems, for instance, and it’s not terribly uncommon to find embedded Linux systems with no shells, no DE/WM, and no package manager. (I’m not 100% sure a filesystem is technically necessary. If it is, you can probably get away with something that’s… kinda sorta a filesystem. But I’ll get to that.)

Also, it’s very common to find “headless” systems without any graphical system whatsoever. Just text-mode. These are usually either servers that are intended to be interacted with over a network or embedded systems without screens. But there are a lot of them in the wild.

There’s also Linux From Scratch. You can decide for yourself whether it qualifies as a “distribution”, but it’s a way of running Linux on (typically) a PC (including things like DE’s) without a package manager.

All that I’d say is truly necessary for every Linux system is 1) a bootloader, 2) a Linux kernel, and 3) a PID 1 process, which may or may not be an init system. (The “PID 1 process” is just the first process the Linux kernel runs after the kernel starts.)
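As a concrete check, /proc will tell you what PID 1 is on whatever Linux system you’re sitting at (usually systemd on a desktop distro; in a container it might just be your shell or app):

```shell
# The name of the first process the kernel started:
cat /proc/1/comm
```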

The “bunch of default applications and daemons” feels like three or four different items to me:

  • Systemd is an example of an “init system.” There are several available: OpenRC, Runit, etc. Its main job is to manage/supervise the daemons and ensure they’re running when they’re supposed to be. (I’ll mention quickly here that Systemd has a lot more functionality built in than just managing daemons and gets a bad rap for it: network configuration, cron-like scheduling, dbus-related functionality for communication between processes, etc. But it still probably qualifies as “an init system.” Just not just an init system.)
  • Daemons are programs that kindof run in the background and handle various things.
  • Coreutils are probably something I’d list separately from user applications. Coreutils are mostly for interacting with low-ish level things. Formatting filesystems. Basic shell commands. Things like that.
  • User applications are the programs that you run on demand and interact with. Terminal emulators, browsers, compilers, things like that. (I’ll admit the line between coreutils and user applications might be a little fuzzy.)

As for your question about graphical systems, X11 and Wayland work a little differently. X11 is a graphical system that technically can be run without a desktop environment or window manager, but it’s pretty limited without one. The DE/WM runs as one or more separate processes communicating with X11 to add functionality like a taskbar, window decorations, the ability to have two or more separate windows and move them around and switch between them, etc. A Wayland “compositor” is generally the same process handling everything X11 would handle plus everything the DE/WM would handle. (Except for the Weston compositor that uses different “shells” for DE/WM kind of functionality.)
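If you’re curious which of those your own session is using, most desktop sessions set a few environment variables you can peek at (these names are just common conventions, and a headless or text-mode session will simply show the fallbacks):

```shell
# Usually "x11", "wayland", or "tty" on desktop systems:
echo "session type: ${XDG_SESSION_TYPE:-unknown}"
# Set when a Wayland compositor is providing a display:
echo "wayland: ${WAYLAND_DISPLAY:-none}"
# Set when an X11 server is providing a display:
echo "x11: ${DISPLAY:-none}"
```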

As far as things that might be missing from your list, I’ll mention the initrd/initramfs. Typically, when the bootloader loads the Linux kernel, an “initial ramdisk” is loaded along with it. Basically, the kernel creates a filesystem that lives only in RAM and populates it from an archive file called an “initramfs”. (“initrd” is the older way to do the same thing.) Sometimes the initramfs is bundled into the same file as the kernel itself. But that initial ramdisk provides an initial set of tools necessary to mount the “main” root filesystem. The initramfs can also do some cool things like handling full disk encryption.

So, the whole list of typical components, as I’d personally construct it, for a Linux system installed on a PC and interacted with directly would be something like:

  • Bootloader
  • Linux Kernel
  • Initramfs
  • Filesystem(s)
  • Shell(s)
  • Init system
  • Daemons
  • Coreutils
  • Graphical system (X11 or Wayland potentially with a DE/WM.)
  • User applications
  • Package manager

But technically, you could have a functional, working “Linux system” with just:

  • Bootloader
  • Linux Kernel
  • Either a nonvolatile filesystem or initrd/initramfs (and I’m not 100% sure this one is even strictly necessary)
  • A PID 1 process

Hopefully this all helps and answers your questions! Never stop learning. :D

For sure. I think there’s a happy medium for those who might not go for the Gentoo approach. (I’m a veteran of Gentoo as well, by the way. :) )

The extreme opposite of that is probably fearing to touch anything once the system is up and running. There are certainly Windows users like that. IT folks have one or two in their families who regularly try to rope them into doing free tech support. (“Sorry, Aunt Debbie, but I haven’t touched any version of Windows since XP. No, Aunt Debbie, I don’t ‘build computers’ for my job. That’s a different department. No, Aunt Debbie, I don’t know how to recover deleted emails in Hotmail. I’ve never used Hotmail.”) I wouldn’t want folks to fall into a habit of being afraid of their Linux system.

And of course, the Gentoo or LFS approach is way too far on the other end of the spectrum for some.

But I definitely wasn’t advocating that OP take the “break all the things and learn how to recompile your Kernel to enable debugging with GDB so you can figure out why such-and-such USB device isn’t working correctly.” (Unless of course OP wants to do that. In which case, knock yourself out, OP!)

I used OpenSUSE before Gentoo. I’m glad I did. It got me some basic bearings in the Linux ecosystem in a gentle way that didn’t make me want to give up and reinstall Windows. I switched to Gentoo basically when I started to realize how limiting relying on the OpenSUSE guis for installing and configuring things was. (I could tell there was a hidden layer of stuff going on behind those guis. And shying away from the deep lore was keeping me from doing things I could otherwise do.)

But even if I thought a particular person had a strong likelihood of taking the Gentoo approach at some point, I’d probably recommend something like Mint until they themselves wanted to dig deeper. And if that never happened, that’s fine too.

And, let’s be honest. There’s a chance that Mint could break as well even if OP isn’t doing reckless things solely for the sake of learning. (I’d say the same about Windows for that matter.) At that point, OP’s options are 1) figure out how to fix it and fix it or 2) wipe everything and reinstall from scratch. Either way, something will have been learned in the process.

So, to OP, don’t feel pressured to do all the deep lore stuff unless/until you find yourself wanting to. But also you might be better off if you aren’t so scared to try to do things that you don’t try to customize your system for your needs in even very simple ways.

And again, good luck!

Welcome to the club!

Just want to mention that "C:" is a Windows-specific convention for naming drives. Linux doesn’t really have any concept of a “C:\ drive.” You’ll of course still have your OS installed on the same disk that Windows today calls "C:", but on Linux it’ll be (and I’m oversimplifying a little bit here) “/”.

I’m a little bit jealous because I can’t start learning such things for the first time like you will be soon. Ha!

My advice: don’t feel like you have to learn it all at once. Don’t feel bad about just accepting the defaults that the installer suggests where you don’t know what to do otherwise. If the command line intimidates you (we were all there once) use the gui tools exclusively as much as you like. Some day you might start to feel limited sticking with gui tools. (Or maybe for your particular purposes, the guis will always be perfect.) Until you do start to feel like you want to learn more about such things for your own sake, don’t let anyone tell you you’re doing it wrong by using the easy way.

(This from someone who does basically everything from the terminal. Lol!)

And don’t be too afraid to break things. Breaking things is arguably the best way to learn. And do feel free to reach out to friendly communities for help when you need it. It’s likely that if something has gone wrong (which is pretty unlikely with Mint), you’ll need to do some terminal stuff to fix it, but people can help you out with that if you ask. :)


All else being equal, less code and fewer dependencies are safer. The bigger the application and the more it tries to do, the larger its attack surface.

(Again, all else being equal. DWM is probably smaller than Weston, but Weston doesn’t let just any old process log keypresses or take screenshots, so it’s probably at least arguable that Weston is (qualifier, handwave, condition, clarification) “safer.”)

Works great on my Raspberry Pi 4! (Most Docker images don’t support arm64.)

I’ve published a good handful of videos on YouTube. Mostly speedruns and glitch/exploit tutorials. (I’m not monetized or anything. Never really was trying either.)

I haven’t made any videos in quite a while, but I’m very much planning to publish future videos only on PeerTube.

First off, let me say IANAL and what little I know about copyright law is pretty biased toward U.S. copyright law.

Second, the title of your post asks “should” it be legal and the body asks “is” it legal. And that’s two very different questions.

My personal opinion is that it should (as in “ought to” or as in “the world would be better off if it were to”) be legal. But whether or not it is legal is, well, not always straightforward.

Complicating the question, some PeerTube instances require that all local videos be under certain specific licenses. For instance, Diode Zone only allows videos that are licensed under Creative Commons licenses which allow making/saving copies and sharing those copies (at least for non-commercial purposes.)

However, saving a video locally creates a “copy.” And copyright covers the creation of “copies.” So I’d expect in the general case saving a local copy of a video from a PeerTube instance (or from YouTube or Vimeo or some such for that matter) would itself be infringement. Copyright law doesn’t have any special allowances for “personal use” as far as I know. (And if it does, it’s likely that allowance doesn’t apply to all sorts of works – just to one or two kinds of works. Like for instance the right to make a digital backup is allowed explicitly in U.S. copyright law for software but not for, for instance, audio CDs.)

If you can’t run docker-compose without sudo, there’s something wrong with your setup. The specifics would be specific to your distro, but most likely there’s a user group you could add your user to with sudo gpasswd -a user group to make the docker run and docker-compose commands work without sudo. (You might have to log out and back in as well to make it take effect if you’ve run that command during the current session.) To find the name of the group, you’ll probably have to do some research about your distro in particular. On Arch (insert hate here ;) ), I think the docker group does that, and it’s not unlikely that the equivalent group for your distro has the same name.
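A quick way to check where you stand (the group name “docker” is the common convention, but as I said, it might differ on your distro):

```shell
# List the groups the current user is in and check for "docker":
if id -nG | tr ' ' '\n' | grep -qx docker; then
    echo "already in the docker group"
else
    echo "not in the docker group; try: sudo gpasswd -a $(id -un) docker"
fi
```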

The “magical s” (called the “SUID bit”) shouldn’t be required to be able to run docker run and/or docker-compose without sudo. Theoretically if you did want to do that, you could do it with sudo chmod u+s /usr/bin/docker. But again it’s probably better to just add yourself to the proper group (or otherwise take the correct steps for your distro.)

But also, running docker-compose (or the docker run command more directly) without sudo won’t necessarily make things inside the docker container run as your user. Making it do so is a little complex, actually, but I’ll go through it here.

So, most Docker images that you’d get from Docker Hub or wherever run by default as root. If you do something like docker run -v /path/to/some/directory/on/your/host:/dir -it alpine touch /dir/foo, even if you’ve got your groups set up to be able to run docker run without sudo, it’ll create a file on your host named “foo” owned by root. Why? Because inside the container, the touch /dir/foo command ran as root.

Honestly, I’d be thrilled if Docker had ways to tell it to be smarter about that kind of thing. Something that could make Docker create the file on the host owned by your user rather than root even if, inside the container, the command that creates the file runs as the user in the Docker container that is root/uid 0.

But that’s not how it works. If root inside the container creates the file, the host sees it as owned by root, which makes things a little more of a pain. C’est la vie.

Now, this is a bit of an aside, but it helped me understand, so I’ll go ahead and include it. It seems like a command run by your (non-root) user shouldn’t be able to create a file owned by root, right? If without sudo you try to chown root:root some_file.txt, it’ll tell you permission denied. And it’s not the chown command that’s denying you permission. It’s the Linux kernel telling the chown command that that’s not allowed. So how can the docker run command create files owned by root when docker run wasn’t run by root, but rather by a more restricted user?

Docker has a daemon (called dockerd) that by default runs all the time as root, waiting for the docker command to direct it to do something. The docker run command doesn’t actually run the container. It talks to the daemon which is running as root and tells the daemon to start a container. Since it’s the daemon actually running the container and the daemon is running as root, commands inside the container are able to create files owned by root even if the docker run command is run by your own user.

If you’re wondering, yes this is a security concern. Consider a command like docker run -it -v /etc:/dir/etc alpine vi /dir/etc/sensitive/file. That command, theoretically, could for instance allow a non-root user to change the host’s root password.

How do you get around that? Well, there are ways to go about running the Docker daemon as a non-root user (“rootless” Docker) that I haven’t really looked into.

Another concern is if, for instance, you’ve got a web service running as root inside a Docker container with a bind volume to the host and the web app has, for instance, a shell injection vulnerability wherein a user could cause a command to run as root inside the docker container which could affect sensitive files outside. To mitigate that issue, you could either not bind mount to the host filesystem at all or run the web service in the Docker container as a different user.

And there are several ways to go about running a process in Docker as a non-root user.

First, some Docker images will already be configured to ensure that what is run inside the container runs as non-root. (When making a Docker image, you specify that by having a USER directive in the Dockerfile.) Usually if things are done that way, the user will also be present in the relevant files in /etc in the image. But as I mentioned earlier, that’s usually not the case for images on Docker Hub.

Next, if you’re using docker-compose, there’s a “user” option for setting the user.

Another way to do this is with the -u argument on the docker run command. Something like docker run -u 1000 -it alpine /bin/sh will give you a shell process owned by the user with id 1000.

Another way is to create the user and su to that user as part of the command passed to docker run. I’ve been known sometimes to do things like:

docker run \
	-it \
	alpine \
	sh -c 'adduser -D tootsweet ; su tootsweet -c /bin/sh'

The only other thing I can think to mention: sometimes you want to run something in a Docker container not just as non-root but as a user id that matches the user id of a particular user on the host. For instance, so that files written to a bind volume end up owned by the desired user and you can work with the files on the host. I honestly haven’t found the best way to deal with that. Mostly I’ve been dealing with that situation with the last method above. The adduser/useradd commands let you add a user with a specific user id. But that’s problematic if the needed uid is already taken by a user in the container. So, so far I’ve kindof just been lucky on that score.
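One hedged sketch of that uid-matching situation that skips creating a user in the container entirely: pass your host uid and gid straight to -u, so files written to the bind mount come out owned by you. This assumes the containerized program is happy running as an arbitrary uid that doesn’t exist in the container’s /etc files (alpine and touch are just example choices, and the guard skips the demo when docker isn’t installed):

```shell
# Run a container as the host user's uid:gid so bind-mounted files stay ours.
echo "host uid:gid is $(id -u):$(id -g)"
if command -v docker >/dev/null 2>&1; then
    docker run --rm -u "$(id -u):$(id -g)" \
        -v "$PWD:/work" -w /work \
        alpine touch created-by-host-user \
        || echo "docker run failed (daemon not running?)"
    ls -l created-by-host-user 2>/dev/null
else
    echo "docker not installed; skipping the demo"
fi
```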

Hopefully that all helps!

Edit: P.S. apparently the way lemmy.world is set up, you can’t mention certain standard *nix file paths such as / e t c / p a s s w d in posts. The post just isn’t accepted. The “reply” button grays out and the loading graphic spins forever with no error message and the post doesn’t get saved. I’m sure this is a misguided attempt at a security measure, but it definitely affects our ability to communicate about standard Linux kind of stuff.

I ran Artix for a while but went back to Arch. Maybe I missed something obvious, but it didn’t seem like there was a nice way even to pacman -Syu on Artix because there were so many packages that were in both the Arch repositories and the Artix repositories. And you couldn’t get away with only Artix repositories because there was so much they didn’t have that the Arch repositories did have.

I assumed it was just that Artix kindof wasn’t quite mature enough yet. But again, it’s entirely possible I missed something obvious. I might well be interested to give Artix another try if so.

It returns that while you have nano running? If so, maybe try ps aux (without the grep part) and just look through until you find “nano” listed. Just to make sure whether it’s running as root or your non-root user.

(And just to be clear, “my sudoer username” means the non-root user that you’re running nano as, right?)

Just a gut feeling, but it feels to me so far like this probably isn’t a hack or security thing. But of course, once the (no pun intended) root issue is found, that’ll provide more info.

There needs to be a Linux kernel fork that when you try to execute a directory executes all programs in the directory. In parallel. Juuuuuuuust to fuck with people who might accidentally execute the /usr/bin directory.

You’re not running nano in a docker container, are you? You’re running nano on a host Linux system, yeah?

Oh, and did you see the ps aux | grep nano one? (Sorry about that. I probably edited that into my post while you were working on a response.)

Yeah, that’s weird.

Maybe try alias nano and LC_ALL=C type nano. Those test whether you have an alias or function named “nano” in bash that might be being run instead of /usr/bin/nano.

Oh, also, whoami and id. Maybe there’s something weird with how you’re logged in and despite not having the username “root” you’re still uid 0 or something strange like that?

Oh! Also maybe while you’ve got nano running, do a ps aux | grep nano and see which user is reported to own that process.

If it makes you feel any better, I decided earlier today to experiment with “castnow”, a command-line program for casting to a Chromecast device.

I grabbed the url of a video off of Archive.org, used wget on a box I was ssh’d into to download the video, and then ran my “castnow” command to cast it to the Chromecast.

I got a progress bar and current/total time on the TV, but aside from that only a black screen and no audio.

I tried getting the latest version of “castnow” from the Git repo. I tried transcoding 7 different ways with FFMPEG. A bunch of things.

Finally, I copied the video to my local machine and ran it in mpv.

The video itself was solid black with no audio and the Archive.org page had comments on it saying “why is there no video or audio?”

I tried a different video and it worked fine.

Try an ls -l $(which nano) and look at the permissions section of the output.

Most files only have hyphens, r’s, w’s, and x’s. (Like -rwxr-xr-x or some such.)

Particularly if there’s an “s” in the output (it’ll be in place of an “x”), that could explain what’s going on.

Basically, that “s” means “when a user runs me, run me as the user who owns me (root, in these cases) even if the user running me isn’t root.” It’s useful on programs like “su” and “sudo” which (after authentication) let you do things as root.

But if that flag is set on nano, that’s pretty weird.
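If you want to see what the setuid bit looks like without touching anything important, you can flip it on a scratch file of your own (setting the bit on your own file doesn’t require root, and it’s only meaningful on executables, so this is harmless):

```shell
# Make a scratch file and set mode 4755: the leading 4 is the setuid bit,
# 755 is the usual rwxr-xr-x.
f=$(mktemp)
chmod 4755 "$f"
stat -c '%A' "$f"   # prints -rwsr-xr-x
rm "$f"
```

Note how the owner’s “x” slot shows “s” instead, which is exactly what you’d be looking for in the ls -l output.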

I’ll keep voting for the less evil option until I’m physically unable to vote.

Dilution of the term "Open Source?"
Is it just me or is passing off things that aren't FOSS as FOSS a much bigger thing lately than it was previously? Don't get me wrong. I remember Microsoft's "shared source" thing from back in the day. So I know it's not a new thing per se. But it still seems like it's suddenly a bigger problem than it was previously.

LLaMa, the large language model, is billed by Meta as "Open Source", [but isn't](https://blog.opensource.org/metas-llama-2-license-is-not-open-source/).

I just learned today about "Grayjay," a video streaming service client app created by Louis Rossmann. [Various articles out there are billing it as "Open Source" or "FOSS"](https://appuals.com/rossman-gray-jay/). It's not. [Grayjay's license](https://gitlab.futo.org/videostreaming/grayjay/-/raw/master/LICENSE?ref_type=heads) doesn't allow commercial redistribution or derivative works. [Its source code is available](https://gitlab.futo.org/videostreaming/grayjay/-/tree/master?ref_type=heads) to the general public, but that's far from sufficient to qualify as "Open Source." (That article even claims "GrayJay is an open-source app, which means that users are free to alter it to meet their specific needs," but [Grayjay's license](https://gitlab.futo.org/videostreaming/grayjay/-/raw/master/LICENSE?ref_type=heads) grants no license to create modified versions at all.)

FUTO, the parent project of Grayjay, [pledges on its site](https://futo.org/what-is-futo/) that "All FUTO-funded projects are expected to be open-source or develop a plan to eventually become so." I *hope* that means they'll be making Grayjay properly Open Source at some point. (Maybe once it's sufficiently mature/tested?) But I worry that they're just conflating "source available" and "Open Source."

I've also seen some sentiment around to the effect of "whatever, it doesn't matter if it doesn't match the OSI's definition of Open Source. Source available is just as good, and OSI doesn't get a monopoly on the term 'Open Source' anyway, and you're being pedantic for refusing to use the term 'Open Source' for this program that won't let you use it commercially or make modifications."

It just makes me nervous. I don't want to see these terms muddied. If that ultimately happens and these terms end up not really being meaningful/helpful, maybe the next best thing is to only speak in terms of concrete license names. We all know the GPL, MIT, BSD, Apache, Mozilla, etc. kinds of licenses are unambiguously FOSS licenses in the strictest sense of the term. If a piece of software is under something that doesn't have a specific name, then the best we'd be able to do is read it and see whether it matches the OSI definition or the Free Software definition.

Until then, I guess I'll keep doing my best to tell folks when something's called FOSS that isn't FOSS. I'm not sure what else to do about this issue, really.

What are you boycotting right now and why? Are there any Boycotts you've ended?
This post is somewhat inspired by a recent post in this same community called "Is anyone else having trouble giving up Reddit due to content?" I imagine "Reddit" will be a common answer. (And it's one of my answers.)

Another of my answers is "Hasbro." First, Wizards of the Coast (a Hasbro subsidiary) tried to revoke an irrevocable license and screw over basically all 3rd-party publishers of D&D content. Then they sent literal mercenaries to threaten one of their customers over an order mixup that wasn't even the customer's fault. D&D: Honor Among Thieves and the latest Transformers look really good, but those are within the scope of my boycott, so I won't be seeing those any time soon.

Third, Microsoft. (Apple too, but then I've never bought any Apple devices in my life, so it hardly qualifies as a boycott.) Just because of their penchant for using devices I own against me in every way they can imagine. And for really predatory business practices.

One boycott that I've ended was a boycott of Nintendo. I was pissed that they started marketing The Legend of Zelda: Breath of the Wild (though it didn't have a name at the time) before the WiiU came out, prompting me to be an early adopter of the WiiU, and then when they actually released BotW, they dual-released it on WiiU and Switch. I slightly eased my boycott when the unpatchable Fusée Gelée vulnerability for the first batch of Switches was discovered. I wanted to get one of the ones I could hack and run homebrew on before they came out with a model that lacked the vulnerability.