Labs: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.

This and the following lab are devoted to a mixture of maintenance tasks. They contain the bare minimum needed to manage your own Linux machine. They also contain a few extras that might be useful in your shell scripts.

The topics covered in these two labs require your own Linux installation, as they need superuser privileges. They will not work on the shared machines in IMPAKT.

Do not forget that the Before class reading is mandatory and there is a quiz that you are supposed to complete before coming to the labs.

Before class reading

Users and root

Although we have already touched on this topic a few times, it is probably better to state it explicitly at this point.

Among the user accounts on a Linux system, one user has special privileges. This user is called root (or the superuser), has the numerical ID 0, and has virtually unlimited power over the running machine. For example, access rights are actually ignored for the root user (i.e., a process running under root ignores any of the rw privileges and can read/write any file).

Unlike on some other systems, Linux is designed in such a way that end-user programs are always executed under normal users and never require root privileges. As a matter of fact, some programs (historically, this was a very common behaviour for IRC chat programs) would not even start under root.

root is needed for actions that modify the whole system. This includes system upgrade, formatting of a hard-drive or modification of system-wide configuration files.

The strict separation of normal (work) accounts and the superuser comes from the fact that Linux was designed as a multi-user system. The philosophy dates back 50+ years, to a time when a system was shared by many users and only one of them – root – was the administrator of the machine. Today, when a typical notebook installation contains only one account, the separation is often more artificial, but it still exists.

The truth is that contemporary users are threatened more by a malicious webpage than by an unauthorized system software update. The superuser account was designed to prevent the latter rather than the former. However, the idea of separate user accounts is still valid today, and a careful user may use different accounts for different activities (e.g., browsing social media vs. working with your bank account).

sudo

Some programs need privilege escalation, i.e., to run with higher privileges and wider permissions than other programs. Some need this by design, and we already mentioned the set-uid bit on executables, which is used when the application always needs the elevated rights (for any user actually launching the program). However, some commands require higher privileges only once in a while, so running them as set-uid broadens the possible attack vectors unnecessarily.

For these situations, one option is sudo (homepage). As the name suggests, it executes (does) one command with superuser privileges. The advantage of sudo is that the system admin can specify who can run which command with elevated permissions. Thus it does not give the allowed user unlimited power over the machine, but only over a selected subset of commands. For example, it is possible to allow a user to restart a specific service (e.g., we want to allow a tester to restart a web server) without giving them control over the whole machine.

Note that the granularity of sudo stops at the level of programs. It does not restrict what the program does internally. For example, it is possible to impose a restriction that alice can execute dangerous_command only with --safe-option. However, if dangerous_command also reads options from ~/.dangerousrc, alice can provide --unsafe-option there and sudo cannot prevent that. In other words, once the initial check is completed, the program runs as if it had been launched under root.
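To illustrate the granularity, a rule in /etc/sudoers (always edited via visudo) might look like the following sketch; the user name and the exact command are made up for illustration:

```
# Allow alice to restart the web server, and nothing else
alice ALL=(root) /usr/bin/systemctl restart httpd
```

Whatever systemctl restart httpd then does internally is outside sudo's control.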

This is extremely important for shared machines where the administrator typically wants to restrict all users as much as possible. On the other hand, for desktop installations, the typical default is that the first user created (usually during installation) can sudo anything. The reasoning is that they are the only (physical) user and know the root password anyway. This is why most tutorials on the web provide the commands for system maintenance including the sudo prefix.

However, you should always understand why you need to run sudo. Never get into the habit of “if it does not work, let’s try prepending sudo”. Also note that there are multiple options for gaining a root shell (e.g., sudo bash).

Note that sudo is not the only security mechanism present. We will not discuss other mechanisms in great detail, but to give you pointers to documentation: there are also SELinux and AppArmor, and a high-level overview on this Wikipedia page.

Package management

Software in Linux is usually installed by means of a package manager. The package manager is a special program that takes care of installation, upgrading, and removing packages. A package can be anything that could be installed; this includes:

  • a program (for example, package ranger installs the program ranger),
  • data files or configuration (e.g., libreoffice-langpack-cs for Czech support inside LibreOffice),
  • a library (e.g., gmp or gmp-devel providing the GNU arbitrary-precision arithmetic library),
  • or a meta package (e.g., xfce that covers xfce4-terminal, xfwm4-themes etc.).

In this sense, Linux is very similar to the shopping-center-style management of applications you know from your smartphone. It is very unusual to install software on Linux using a graphical install wizard.

The advantage of using centralized package management is the ability to upgrade the whole system at once without the need to check updates of individual applications.

Individual packages often have dependencies – installing one package results in the transitive installation of other packages the first one depends on (for example, a web browser will require basic graphical support etc.). This makes the upgrading process a bit more complicated (for the package manager, though, not for the user). But it can save some disk space. And the most important advantage is that different applications share the same libraries (on Linux, they have the .so extension and are somewhat similar to DLLs on Windows), so it is possible to upgrade a library even for an otherwise abandoned application. That is essential when patching security vulnerabilities.
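If you are curious which shared libraries a particular program uses, you can list them with ldd; a quick sketch (the exact list differs from system to system):

```shell
# List the shared (.so) libraries that /bin/ls is dynamically linked against
ldd /bin/ls
```

Each line shows one required library and the file it resolves to on the current system.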

Note that it is possible to install software manually too. From the file-system point of view, there is no difference – the package manager also just copies files to the right directories. However, manually installed software has to be upgraded manually too and generally complicates the setup. So avoid it when possible.

A typical package manager works with several software repositories. You can think of it as your cell phone having multiple marketplaces to choose applications from. Typically, you will encounter the following types of repositories. It is up to each user (administrator) to decide which to use.

  • Stable and testing, where the latter provides newer versions of a software with slight possibility of bugs (usually, there is a third repository, often called unstable, for bleeding-edge software).
  • Free and non-free, where the former contains only software without any legal surprises. Non-free software can be encumbered by patent or royalty issues (usually based on US law), or by a license which restricts use or redistribution.

It is also possible to set up your own repository. This can be useful if you want to distribute your software to multiple machines (and you cannot publish the packages in the normal repositories because it is, for example, proprietary).

Most distributions also offer some kind of user-repository support where virtually anyone can publish their software. For Fedora, this is done via Copr.

Note that both official and unofficial repositories offer no guarantees in the legal sense. However, using the official repositories of a given distribution is considered safe, the amount of attacks on software repositories is low and – unlike with many commercial organizations – distribution maintainers are very open in informing about security incidents. It is probably much easier to encounter a malicious application in your smartphone marketplace than to encounter it in an official repository of a Linux distribution.

Alternatives to classic package managers

The existence of various package managers has its disadvantages – when using multiple distributions, the user has to know how to operate different package managers. Furthermore, different distributions need to create different packages (compatible with their package managers), which results in more work.

Therefore, an effort has been made to unify package management across distributions. Snap was created in order to install packages uniformly among distributions. While for some users it is a way to get the software they want as quickly as possible, for others the proprietary nature of Snap and the need for an account at the package store present potential dangers and a shift away from the Linux open-source ideology.

To demonstrate a problematic example, let’s attempt to install PyCharm. PyCharm is an IDE for Python, which is (unfortunately) mostly directed at Windows users and also offers a paid professional version. No PyCharm package is offered in Fedora.

This is rather an exception – you won’t encounter problems with most open-source software. Actually, even companies that were traditionally oriented towards different OSes offer DNF-based repositories for their products these days. Note that in this case, providing a full repository is the ideal choice. Users can choose whether to enable this repository or not, distribution maintainers can focus on other tools and the company keeps full control over the release cycle and the distribution process.

There are two options to install PyCharm:

  1. Use Snap
  2. Use the ad-hoc installation script. It is downloaded with the PyCharm installation.

Note that the second option is generally frowned upon. It requires running a shell script that the user downloads, which is generally considered dangerous – you should always examine such scripts. (Obviously, using a package manager also involves downloading and running scripts, but the attack surface is a bit smaller.)

Another issue is that any application downloaded in this way will not be automatically updated.

Which one to use

Snap is not the only alternative to the classic package managers.

Among others, there are Flatpak and AppImage. They can co-exist, and it is up to the user to decide which one to choose.

The decision which one to use is influenced by many factors. Generally, using pre-packaged software distributed with your system (distribution) should be preferred.

As a last note – even if the software you want to install does not provide packages for your distribution, you can always create them yourself. The process is out of scope for this course, but it is actually not very difficult.

Process management and signals

When you start a program (i.e., an executable file), it becomes a process. The executable file and a running process share the code – it is the same in both. However, a process also contains the stack (e.g., for local variables), heap, list of opened files etc. – all this is usually considered the context of the process. Often, the phrases running program and process are used interchangeably.

To view the list of running processes on your machine, you can execute ps -e (or ps -axufw for a more detailed list). However, for interactive inspection, htop is a much nicer alternative.

We can use htop to view basic properties of processes. For illustration here, this is an example of ps output (with the --forest option used to depict which process launched which).

UID          PID    PPID  C STIME TTY          TIME CMD
root           2       0  0 Feb22 ?        00:00:00 [kthreadd]
root           3       2  0 Feb22 ?        00:00:00  \_ [rcu_gp]
root           4       2  0 Feb22 ?        00:00:00  \_ [rcu_par_gp]
root           6       2  0 Feb22 ?        00:00:00  \_ [kworker/0:0H-events_highpri]
root           8       2  0 Feb22 ?        00:00:00  \_ [mm_percpu_wq]
root          10       2  0 Feb22 ?        00:00:00  \_ [rcu_tasks_kthre]
root          11       2  0 Feb22 ?        00:00:00  \_ [rcu_tasks_rude_]
root           1       0  0 Feb22 ?        00:00:09 /sbin/init
root         275       1  0 Feb22 ?        00:00:16 /usr/lib/systemd/systemd-journald
root         289       1  0 Feb22 ?        00:00:02 /usr/lib/systemd/systemd-udevd
root         558       1  0 Feb22 ?        00:00:00 /usr/bin/xdm -nodaemon -config /etc/X11/...
root         561     558 10 Feb22 tty2     22:42:35  \_ /usr/lib/Xorg :0 -nolisten tcp -auth /var/lib/xdm/...
root         597     558  0 Feb22 ?        00:00:00  \_ -:0
intro        621     597  0 Feb22 ?        00:00:40      \_ xfce4-session
intro        830     621  0 Feb22 ?        00:05:54          \_ xfce4-panel --display :0.0 --sm-client-id ...
intro       1870     830  4 Feb22 ?        09:32:37              \_ /usr/lib/firefox/firefox
intro       1966    1870  0 Feb22 ?        00:00:01              |   \_ /usr/lib/firefox/firefox -contentproc ...
intro       4432     830  0 Feb22 ?        01:14:50              \_ xfce4-terminal
intro       4458    4432  0 Feb22 pts/0    00:00:11                  \_ bash
intro     648552    4458  0 09:54 pts/0    00:00:00                  |   \_ ps -ef --forest
intro      15655    4432  0 Feb22 pts/4    00:00:00                  \_ bash
intro     639421  549293  0 Mar02 pts/8    00:02:00                      \_ man ps
...

First of all, each process has a process ID, often abbreviated as PID. The PID is a number assigned by the kernel and used by many utilities for process management. PID 1 is used by the first process in the system, which is always running. (PID 0 is reserved as a special value – see fork(2) if you are interested in details.) Other processes are assigned their PIDs (more or less) incrementally, and PIDs are eventually reused.

Note that all this information is actually available in /proc/PID/.
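As a small sketch, we can peek at what the kernel publishes there about the current shell ($$ expands to the PID of the running shell):

```shell
# Each process has a directory under /proc named after its PID
echo "Shell PID: $$"
cat "/proc/$$/comm"        # the command name of this process
head -1 "/proc/$$/status"  # the first line of its detailed status
```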

Signals

Linux systems use the concept of signals to communicate asynchronously with a running program (process). The word asynchronously means that the signal can be sent (and delivered) to the process regardless of its state. Compare this with communication via standard input (for example), where the program controls when it reads from it.

However, signals do not provide a very rich communication channel: the only information available (apart from the fact that the signal was sent) is the signal number. The signal numbers are defined by the kernel, which also handles some signals by itself. Otherwise, signals can be received by the application and acted upon. If the application does not handle the signal, it is processed in the default way. For some signals, the default is terminating the application; other signals are ignored by default.

This is reflected in the fact that the utility used to send signals is called kill (because usually the target process terminates).

By default, the kill utility sends signal 15 (also called TERM) that instructs the application to terminate. An application may decide to catch this signal, flush its data to the disk etc., and then terminate. But it can do virtually anything and it may even ignore the signal completely. Apart from TERM, we can instruct kill to send the KILL signal (number 9) which is handled by kernel itself. It immediately and forcefully terminates the application (even if the application decides to mask or handle the signal, the request is ignored).

Most of the signals are sent to the process in reaction to a specific event. For example, the PIPE signal is sent when a process tries to write to a pipe whose reading end has already been closed. (Remember the issue from lab 04.) Terminating a program by pressing Ctrl-C in the terminal actually sends the INT (interrupt) signal to it. If you are curious about the other signals, see signal(7).

For example, when the system is shutting down, it sends TERM to all its processes. This gives them a chance to terminate cleanly. Processes which are still alive after some time are killed forcefully with KILL.
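The TERM-then-KILL pattern can be sketched in a few lines of shell (a toy illustration only; a real init system is more careful):

```shell
# Start a long-running background process to play with
sleep 300 &
victim=$!

kill "$victim"                      # polite request: send TERM
sleep 1                             # give the process time to clean up
if kill -0 "$victim" 2>/dev/null; then
    kill -9 "$victim"               # still alive: force KILL
fi
wait "$victim" 2>/dev/null || true  # collect the exit status
echo "Process $victim is gone"
```

Note that kill -0 sends no signal at all; it merely checks whether the process still exists.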

We will see how to react to signals during the labs.

sudo

To try sudo, run fdisk -l. Generally, fdisk is a tool for partitioning disks. With -l, it reads information about all disks in your system and displays the partitions on them.

Without sudo, it will likely show only the following message:

fdisk: cannot open /dev/sda: Permission denied

Running it with sudo displays the actual information.

sudo fdisk -l
Disk /dev/sda: 480 GiB, 515396075520 bytes, 1006632960 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xdc505942

Device     Boot    Start        End   Sectors   Size Id Type
/dev/sda1  *        2048    2099199   2097152     1G 83 Linux
/dev/sda2        2099200   18620415  16521216   7.9G 82 Linux swap / Solaris
/dev/sda3       18620416 1006632959 988012544 471.1G 83 Linux

Note that sudo typically asks for a password (though that can be configured). It is the password of the current user, not root’s. (If you want to authenticate using the root’s password, you can use su instead.)

dnf (a.k.a. package manager in Fedora)

Note: Fedora used to have yum as its package manager, and you can still find it in many tutorials on the Internet (even in quite recent ones). It is considered obsolete and you should avoid it. If you are used to yum from older versions of Fedora or from other RPM-based distributions, you will find dnf very similar and, in many situations, faster than yum.

The package manager for Fedora is called DNF.

Note: if you decided to use a different distribution, you will need to update the commands to match your system. Generally, the operations would be rather similar but we cannot provide a tutorial for every package manager here.

You can use the search command to get a list of packages which match the given name. Note that searching is not a privileged operation, hence it does not require sudo.

dnf search arduino
dnf search atool

Note that searching for a very generic term can yield hundreds of results.

The output is in the following format:

atool.noarch : A perl script for managing file archives of various types
ratools.x86_64

The .noarch and .x86_64 suffixes describe the nature of the package. noarch usually refers to a data package or a package using interpreted languages, while x86_64 denotes a package with binaries for the x86-64 architecture (e.g., written in C or Rust and compiled to machine code).

To install a software package, run dnf with the install subcommand, giving it the name of the package to install. Here, sudo is needed as we are modifying the system.

sudo dnf install atool

Some applications are not part of any software repository, but you can still download them in a format understandable by your package manager. That is a better situation than installing the files manually, because your package manager knows about the files (although it cannot upgrade them automatically). One such example is the Zoom client, which has to be installed like this:

sudo dnf install "https://zoom.us/client/latest/zoom_x86_64.rpm"

To upgrade the whole system, simply run sudo dnf upgrade. DNF will ask for confirmation and then upgrade all available packages.

Note that unlike on other systems, you can always choose when to upgrade. The system will never reboot the machine for you or display a message about a needed restart unless you explicitly ask for it.

If you want to install a whole group of packages, you can use dnf grouplist to view the list of available groups and sudo dnf install @GROUP_NAME to install one.

The commands above contain the basics for maintaining your Fedora installation with respect to package management. The following links provide more information. The official Wiki page is a good source of information if you already know the system a bit.

For beginners, this guide about DNF and this tutorial are probably a better starting point.

Working with processes

Execute ps -ef --forest to view all processes on your machine. Because of your graphical interface, the list will probably be quite long.

In practice, a small server offering web pages, a calendar, and SSH access can have about 80 processes; for a desktop running Xfce with a browser and a few other applications, the number will rise to almost 300 (this really depends a lot on the configuration, but it is a ballpark estimate). About 50–60 of these are actually internal kernel threads. In other words, a web/calendar server needs about 20 “real” processes, a desktop about 200 of them :-).

You can view the same information with htop. You can also easily configure it to display information about your system, such as the amount of free memory or CPU usage.

It can look like this (this one from a 32G/24CPU machine):

  Date & Time: 2022-03-03 10:07:26                                   Uptime: 50 days, 20:55:31
    1[||                     4.5%]    7[|||                    5.0%]  13[||||                    8.9%]  19[|                       3.2%]
    2[|                      1.3%]    8[|                      1.9%]  14[||                      3.8%]  20[||                      4.5%]
    3[||                     5.1%]    9[|                      0.6%]  15[||||||                 17.8%]  21[|||                     5.7%]
    4[                       0.0%]   10[                       0.0%]  16[||                      4.4%]  22[||                      4.5%]
    5[||                     4.5%]   11[||                     5.1%]  17[||                      4.5%]  23[|||                     4.5%]
    6[                       0.0%]   12[||                     5.1%]  18[||                      4.5%]  24[||                      4.5%]
  Mem[|||||||||||||||||||||||||||||||                   11.2G/31.3G] Tasks: 252, 1803 thr, 341 kthr; 1 running
  Swp[||||||||||||||||||||||||||||||||||||||||||||||||  3.06G/4.00G] Load average: 0.59 1.19 1.13

    PID USER       PRI  NI  VIRT   RES   SHR S CPU%+MEM%   TIME+  Command
    966 vojta       20   0  448M  9692  5824 S  0.0  0.0  0:01.01 |           |  |  `-
   3130 vojta       20   0 85856  3360  3356 S  0.0  0.0  0:00.03 |           |  `- /usr/lib/eclipse/eclipse -data /home/vojta/mff/eclips
   3174 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  2h57:34 |           |  |  `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.requi
   3175 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  1h12:37 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3176 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  2:44.29 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3177 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:00.04 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3178 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:00.96 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3179 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:00.23 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3180 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  8:46.39 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3181 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  3:53.24 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3182 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:01.85 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3183 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:01.62 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3184 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:00.00 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3185 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:00.00 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3186 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  1:55.30 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3187 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:20.61 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3188 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:01.52 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3189 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0  0:01.68 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
   3193 vojta       20   0 20.8G 1274M 28068 S  0.0  4.0 17:29.22 |           |  |     `- /usr/lib/jvm/java-11-openjdk/bin/java -Dosgi.re
F1 Help  F2 Setup F3 Search F4 Filter F5 List  F6 SortBy F7 Nice - F8 Nice + F9 Kill  F10 Quit

Similar to Midnight Commander, function keys perform the most important actions, and a help reference is visible in the bottom bar.

Signals

Open two terminals now.

Run the following command in the first one.

sleep 999

In the second terminal, find the PID of this process. Hint.

From the second terminal, we will now kill the program.

kill THE_PID

Looking back at the first terminal, you should see the following message.

Terminated (SIGTERM).

Start the sleep command again, but send KILL this time.

kill -9 THE_PID

The message probably changed to Killed and the process was terminated as well.

Reacting to signals in Python

Your program would typically react to TERM (the default “soft” termination), INT (Ctrl-C from the keyboard) and perhaps to USR1 or USR2 (the only user-defined signals). System daemons (non-interactive programs handling system services) often react to HUP by reloading their configuration.

The following Python program reacts to Ctrl-C by terminating:

import signal
import sys
import time

# Actual signal callback
def on_signal(signal_number, frame_info):
    print("")
    print("Caught signal {} ({})".format(signal_number, frame_info))
    sys.exit()

def main():
    # Setting signal callback
    signal.signal(signal.SIGINT, on_signal)
    while True:
        time.sleep(0.5)
        print("Hit Ctrl-C...")

if __name__ == '__main__':
    main()

Exercise: write a program that tries to print all prime numbers. When terminating, it stores the highest number found so far and on the next invocation, it continues from there. Solution.

Reacting to a signal in a shell script is also possible using the trap command. Note that a typical action for a signal handler in a shell script is clean-up of temporary files.

#!/bin/bash

set -ueo pipefail

on_interrupt() {
    echo "Interrupted, terminating ..." >&2
    exit 17
}

on_exit() {
    trap - INT TERM EXIT   # reset the handlers so they cannot fire again during clean-up
    echo "Cleaning up..." >&2
    rm -f "$MY_TEMP"
}

MY_TEMP="$( mktemp )"

trap on_interrupt INT TERM
trap on_exit EXIT

echo "Running as $$"

counter=1
while [ "$counter" -lt 10 ]; do
    date "+%Y-%m-%d %H:%M:%S | Waiting for Ctrl-C (loop $counter) ..."
    echo "$counter" >"$MY_TEMP"
    sleep 1
    counter=$(( counter + 1 ))
done

The command trap receives the command to execute as its first argument; the remaining arguments list the signals to react to. Note that the special signal EXIT means normal script termination. Hence, we do not need to call on_exit explicitly after the loop terminates.

Using - instead of the handler name resets the handling of the listed signals to their defaults. Such a reset inside a handler prevents double invocation: for example, a function trapped for both INT and EXIT would otherwise be called twice when the user hits Ctrl-C (once for the INT caused by Ctrl-C itself and once more via the explicit call to exit).

From now on, your shell scripts shall always include a signal handler for clean-up of temporary files.

Note the use of $$, which expands to the current PID. Alternatively, you can use pgrep <program_name> to find the PIDs of running programs. Similarly, you can use killall to kill processes by name (but be careful, as with great power comes great responsibility). Consult the manual pages for more details.
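A small sketch combining these tools (we launch a throwaway sleep so that there is something to find):

```shell
# Start a background process and remember its PID
sleep 300 &
echo "Started background sleep with PID $!"

pgrep -x sleep    # list the PIDs of all processes named exactly "sleep"

kill "$!"         # clean up the helper process
```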

Run the above script, note its PID and run the following in a new terminal.

kill THE_PID_PRINTED_BY_THE_ABOVE_SCRIPT

The script was terminated and the clean-up routine was called. Compare with the situation when you comment out the trap commands.

Run the script again but pass -9 to kill to specify that you want to send signal nine (i.e., KILL).

What happened? Answer.

While signals are a rudimentary mechanism that passes only binary events with no additional data, they are the primary way of process control in Linux. (If you need a richer communication channel, you can use D-Bus instead.) Reasonable reaction to basic signals is a must for server-style applications (e.g., a web server should react to TERM by completing outstanding requests without accepting new connections, and terminating afterwards). In shell scripts, it is considered good manners to always clean up temporary files.

User account management

So far, we used /etc/passwd to get information about accounts directly. In practice, things can be more complicated, as the information may come from different sources. For example, in the IMPAKT labs, the information is fetched from the CAS.

Generally, there can be multiple sources of user accounts. Thus, it is better to use getent to query them all instead of relying on /etc/passwd only:

getent passwd YOUR_GITLAB_LOGIN

You can specify group instead of passwd to query groups, too.

Note that the output is actually in the same format as /etc/passwd. This is on purpose.

Without the login parameter, the command will (usually) list all accounts.
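For example, we can pick individual fields from a record with cut (shown here for root, which exists on virtually every system; the fields are colon-separated as in /etc/passwd):

```shell
# Login name, UID and login shell of the root account
getent passwd root | cut -d: -f1,3,7
```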

How many student accounts are on the machine u-pl17.ms.mff.cuni.cz? Hint. Solution.

Creating new account

Creating a new account in Linux is rather straightforward. The utility useradd creates a new user by adding the appropriate entry into /etc/passwd and by creating a home directory.

Technically, nothing more needs to be done: the entry in /etc/passwd means that a numerical user ID is assigned to a human-readable name, and the creation of the home directory ensures that the user has a writable directory to start with.
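For illustration, a single /etc/passwd entry consists of seven colon-separated fields – login, password placeholder, UID, GID, comment, home directory, and login shell. The account alice below is made up:

```
alice:x:1001:1001:Alice Example:/home/alice:/bin/bash
```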

Practically, one can create a new user by editing /etc/passwd manually, but it is not recommended.

Graded tasks (deadline: Apr 24)

09/passwd.txt (20 points)

Use the machine linux.ms.mff.cuni.cz to retrieve information about your account.

Paste the corresponding line of passwd database about your account into 09/passwd.txt.

Automated tests verify only the format of your answer, not actual correctness.

Hint.

09/signals.txt (25 points)

Run program nswi177-signals on linux.ms.mff.cuni.cz.

You will need to send specific signals in given order to this program to complete this task.

The program will guide you: it will print which signals you are supposed to send.

Copy the last line of output (there will be two numbers) of this program to 09/signals.txt.

This task is only partially checked by automated tests.

09/countdown.sh (35 points)

Write a script which gets one parameter: the number of seconds (a nonnegative integer) to count down. Each second, it prints the time left.

You can safely assume that the program will be invoked correctly under all circumstances.

If the user hits Ctrl-C during execution (or sends the TERM signal), the script aborts by printing the word Aborted and exits with the status of 17.

Example:

$ ./countdown.sh 10
10
9
8
7
6
5
4
3
2
1

Each line will appear a second after the previous one. Use sleep 1 to wait between printing the lines and before exiting. Therefore, the first number is printed immediately, whereas after the final line with 1 you still have to wait a second before your script terminates.

Note that the script will be a little bit slower than the requested time in seconds (because of the overhead of calling the sleep command itself) but that is fine.

Example execution when user hits Ctrl-C is as follows (^C denotes the place where the user hit Ctrl-C and is not an output from your program). Note that the FAILED message comes from the echo and serves only to emphasize the non-zero exit code.

$ ./countdown.sh 5 || echo "FAILED"
5
4
^C
Aborted
FAILED

09/dnf.txt (20 points)

Which package version (not program version!) of msim-git is installed on linux.ms.mff.cuni.cz?

This task is only partially checked by automated tests.

Use dnf help to get a list of dnf subcommands and find the right one to finish this task (you will need one of the main commands).

It is fine to provide bare version or version with release.

Learning outcomes

Conceptual knowledge

Conceptual knowledge is about understanding the meaning of given terms and putting them into context. Therefore, you should be able to …

  • explain what account types exist on Linux and how they differ

  • explain why mechanisms such as sudo are needed and preferred to working as root all the time

  • explain what a package is and how it is used on Linux

  • explain what a process signal is

Practical skills

Practical skills are usually about the usage of given programs to solve various tasks. Therefore, you should be able to …

  • use getent to retrieve information about existing accounts

  • use useradd to create a new user account

  • use a package manager for installation and removal of packages and for system updates

  • use ps and pgrep

  • use htop

  • send a signal to a running process