Welcome!

This is the Redox book, which will go through (almost) everything about Redox: design, philosophy, how it works, how you can contribute, how to deploy Redox, and much more.

Please note that this book is currently being (re)written.

If you want to skip straight to trying out Redox, see Getting started.

What is Redox?

Redox is a general-purpose operating system written in pure Rust. Our aim is to provide a fully functional Unix-like microkernel that is both secure and free.

We have modest compatibility with POSIX, allowing Redox to run many programs without porting.

We take inspiration from Plan 9, MINIX, Linux, and BSD. Redox aims to synthesize years of research and hard-won experience into a system that feels modern and familiar.

At this time, Redox supports:

  • All x86-64 CPUs.
  • Graphics cards with VBE support (all Nvidia, Intel, and AMD cards from the past decade have this).
  • AHCI disks.
  • E1000 or RTL8168 network cards.
  • Intel HDA audio controllers.
  • Mouse and keyboard with PS/2 emulation. TODO: update

This book is broken into the following chapters:

TODO

It is written such that you do not need any prior knowledge of Rust or OS development.

Our Goals

Redox is an attempt to make a complete, fully-functioning, general-purpose operating system with a focus on safety, freedom, reliability, correctness, and pragmatism.

We want to be able to use it, without obstructions, as an alternative to Linux on our computers. It should be able to run most Linux programs with only minimal modifications.

We're aiming towards a complete, safe Rust ecosystem. This is a design choice, which hopefully improves correctness and security (see Why Rust).

We want to improve the security design when compared to other Unix-like kernels by using safe defaults and disallowing insecure configurations where possible.

The non-goals of Redox

We are not a Linux clone or strictly POSIX-compliant, nor are we mad scientists who wish to redesign everything. Generally, we stick to well-tested and proven-correct designs. If it ain't broke, don't fix it.

This means that a large number of standard programs and libraries will be compatible with Redox. Some things that do not align with our design decisions will have to be ported.

The key here is the trade-off between correctness and compatibility. Ideally, you should be able to achieve both, but unfortunately, you can't always do so.

Our Philosophy

We believe in free software.

Redox OS will be packaged only with compatible free software, to ensure that the entire default distribution may be inspected, modified, and redistributed. Software that does not allow these features, i.e. proprietary software, is against the goals of security and freedom and will not be endorsed by Redox OS. We endorse the GNU Free System Distribution Guidelines.

To view a list of compatible licenses, please refer to the GNU List of Licenses.

Redox OS is predominantly MIT X11-style licensed, including all software, documentation, and fonts, with only a few exceptions.

The MIT X11-style license has the following properties:

  • It gives you, the user of the software, complete and unrestrained access to the software, such that you may inspect, modify, and redistribute your changes
    • Inspection: Anyone may inspect the software for security vulnerabilities
    • Modification: Anyone may modify the software to fix security vulnerabilities
    • Redistribution: Anyone may redistribute the software to patch security vulnerabilities
  • It is compatible with GPL licenses - Projects licensed as GPL can be distributed with Redox OS
  • It allows for the incorporation of GPL-incompatible free software, such as OpenZFS, which is CDDL licensed

The license does not restrict the software that may run on Redox, however. Thanks to the microkernel architecture, even traditionally tightly coupled components such as drivers can be distributed separately, so maintainers are free to choose whatever license they like for their projects.

Why Redox?

There are plenty of operating systems out there. It's natural to wonder why we should build a new one. Wouldn't it be better to contribute to an existing project?

The Redox community believes that existing projects fall short, and that our goals are best served by a new project built from scratch.

Let's consider three existing projects.

Linux

Linux runs the world, and boots on everything from high performance servers to tiny embedded devices. Indeed, many Redox community members run Linux as their main workstations. However, Linux is not an ideal platform for new innovation in OS development.

  • Legacy until infinity: Old syscalls stay around forever, and drivers for hardware that has not been sold in years remain in the kernel as mandatory parts. While they can be disabled, running them in kernel space is unnecessary, and they can be a source of system crashes, security issues, and unexpected bugs.
  • Huge codebase: To contribute, you must find a place to fit into nearly 25 million lines of code, in the kernel alone. This is a consequence of Linux's monolithic architecture.
  • Non-permissive license: Linux is licensed under the GPLv2, preventing the use of other free software licenses inside the kernel. For more on why, see Our Philosophy.
  • Lack of memory safety: Linux has had numerous memory safety issues over the years. C is a fine language, but for such a security-critical system, it is difficult to use safely.

BSD

It is no secret that we're more in favor of BSD. The BSD community has led the way in many innovations over the past two decades. Things like jails and ZFS yield more reliable systems, and other operating systems are still catching up.

That said, BSD doesn't meet our needs either:

  • It still has a monolithic kernel. This means that a single buggy driver can crash, hang, or, in the worst case, cause damage to the system.
  • The use of C in the kernel makes memory safety issues likely.

MINIX

And what about MINIX? Its microkernel design is a big influence on the Redox project, especially for reasons like reliability. MINIX is the most in line with Redox's philosophy. It has a similar design, and a similar license.

  • Use of C - again, we would like drivers and the kernel to be written in Rust, to improve readability and organization, and to catch more potential safety errors. Compared to monolithic kernels, MINIX is actually a very well-written and manageable code base, but it is still prone to memory unsafety bugs, for example. These classes of bugs can unfortunately be quite fatal, due to their unexpected nature.
  • Lack of driver support - MINIX does not work well on real hardware, partly because real-hardware support has never been a focus.
  • Less focus on "Everything is a File" - MINIX focuses less on "Everything is a File" than operating systems like Plan 9 do. We are particularly focused on this idiom, as it creates a more uniform program infrastructure.

The Need for Something New

We have to admit that we like the idea of writing something of our own (Not Invented Here syndrome). There are numerous places in the MINIX 3 source code where we would like to make changes, so many that perhaps a rewrite in Rust makes the most sense.

  • Different VFS model, based on URLs, where a program can control an entire segmented filesystem
  • Different driver model, where drivers interface with filesystems like network: and audio: to provide features
  • Different file system, RedoxFS, with a TFS implementation in progress
  • User space written mostly in Rust
  • Orbital, a new GUI

How Redox Compares to Other Operating Systems

We share quite a lot with other operating systems.

Syscalls

The Redox syscall interface is Unix-y. For example, we have open, pipe, pipe2, lseek, read, write, brk, execv, and so on. Currently, we support the 31 most common Linux syscalls.

Compared to Linux, our syscall interface is much more minimal. This is not because of our stage of development, but a deliberate minimalist design choice.

"Everything is a URL"

This is a generalization of "Everything is a file", largely inspired by Plan 9. In Redox, "resources" (TODO: link) can be both socket-like and file-like, making them fast enough to use for virtually everything.

This way, we get a more unified system API. We will explain this later, in URLs, schemes, and resources.

The kernel

Redox's kernel is a microkernel. The architecture is largely inspired by MINIX.

In contrast to Linux or BSD, Redox has only 16,000 lines of kernel code, a number that is often decreasing. Most services are provided in user space.

Having vastly smaller amounts of code in the kernel makes it easier to find and fix bugs/security issues more efficiently. Andrew Tanenbaum (author of MINIX) stated that for every 1,000 lines of properly written code, there is a bug. This means that for a monolithic kernel with nearly 25,000,000 lines of code, there could be nearly 25,000 bugs. A microkernel with only 16,000 lines of code would mean that around 16 bugs exist.

It should be noted that the extra code has simply moved out of kernel space, where it is less dangerous; the total amount of code is not necessarily smaller.

The main idea is to have components and drivers that would be inside a monolithic kernel exist in user space and follow the Principle of Least Authority (POLA), where every individual component:

  • Is completely isolated in memory, as a separate user process
    • The failure of one component does not crash other components
    • Foreign and untrusted code does not expose the entire system
    • Bugs and malware cannot spread to other components
  • Has restricted communication with other components
  • Doesn't have Admin/Super-User privileges
    • Bugs are moved to user space, which reduces their power

All of this increases the reliability of the system significantly. This is useful for mission-critical applications and for users that want minimal issues with their computer systems.

Why Rust?

Why write an operating system in Rust? Why even write in Rust?

Rust has enormous advantages, because for operating systems, safety matters. A lot, actually.

Since operating systems are such an integrated part of computing, they are a very security critical component.

There have been numerous bugs and vulnerabilities in Linux, BSD, glibc, Bash, X, etc. over the years, simply due to the lack of memory and type safety. Rust gets this right, by enforcing memory safety statically.

Design does matter, but so does implementation. Rust attempts to avoid these unexpected memory-unsafe conditions (which are a major source of security-critical bugs). Design is a very transparent source of issues: you know what is going on, what was intended and what was not.

The basic design of the kernel/user space separation is fairly similar to Unix-like systems, at this point. The idea is roughly the same: you separate kernel and user space, through strict enforcement by the kernel, which manages memory and other critical resources.

However, we have an advantage: enforced memory and type safety. This is Rust's strong side -- a large number of "unexpected bugs" (for example, undefined behavior) are eliminated at compile time.

The design of Linux and BSD is secure. The implementation is not.

If you browse the security vulnerability lists for those systems, you'll probably notice that many are bugs originating in unsafe conditions (which Rust effectively eliminates), such as buffer overflows, rather than in the overall design.

We hope that using Rust will produce a more secure operating system in the end.

Unsafes

unsafe is a way to tell Rust "I know what I'm doing!", which is often necessary when writing low-level code that provides safe abstractions. You cannot write a kernel without unsafe code.

In that light, a kernel cannot be 100% safe; however, the unsafe parts have to be marked with an unsafe block, which keeps them segregated from the safe code. We seek to eliminate unsafe usage where we can, and when we do use unsafe, we are extremely careful.

A quick grep gives us some stats: the kernel has about 300 invocations of unsafe in about 16,000 lines of code overall. Every one of these is carefully audited to ensure correctness.

This contrasts with kernels written in C, which cannot make guarantees about safety without costly formal analysis.

You can find out more about how unsafe works in the relevant section of the Rust book.

Side projects

Redox is a complete Rust operating system. In addition to the kernel, we are developing several side projects, including:

  • TFS: A file system inspired by ZFS.
  • Ion: The Redox shell.
  • Orbital: The display server of Redox.
  • OrbTK: A widget toolkit.
  • pkgutils: Redox's package management library and its command-line frontend.
  • Sodium: A Vi-like editor.
  • ralloc: A memory allocator.
  • libextra: Supplement for libstd, used throughout the Redox code base.
  • games-for-redox: A collection of mini-games for Redox (similar to BSD-games).
  • and a few other exciting projects you can explore here.

We also have three utility distributions, which are collections of small, useful command-line programs:

  • Coreutils: A minimal set of utilities essential for a usable system.
  • Extrautils: Extra utilities such as reminders, calendars, spellcheck, and so on.
  • Binutils: Utilities for working with binary files.

We also actively contribute to third party projects that are heavily used in Redox.

What tools are fitting for the Redox distribution?

Some of these tools will in the future be moved out of the default distribution, into separate optional packages. Examples of these are Orbital, OrbTK, Sodium, and so on.

The listed tools fall into three categories:

  1. Critical, which are needed for a full functioning and usable system.
  2. Ecosystem-friendly, which are there for establishing consistency within the ecosystem.
  3. Fun, which are "nice" to have and are inherently simple.

The first category should be obvious: an OS without certain core tools is a useless OS. The second category is there for making sure that the Redox infrastructure is consistent and integrated (e.g., pkgutils, OrbTK, and libextra). The third category contains tools which are likely to become non-default in the future, but are nonetheless in the official distribution right now, for the charm.

It is important to note we seek to avoid non-Rust tools, for safety and consistency (see Why Rust).

Getting started

Preparing the build has information about setting up your system to compile Redox, which is necessary if you want to contribute to Redox development.

If you aren't (currently) interested in going through the trouble of building Redox, you can download the latest release. See the instructions for running in a virtual machine or running on real hardware.

Trying Redox in a virtual machine

The ISO image is not the preferred way to run Redox in a virtual machine. Currently the ISO image loads the entire hard disk image (including unused space) into memory. In the future, the live disk should be improved so that this doesn't happen.

Instead, you want to use the hard drive image, which you can find here (version 0.6.0) as a .bin.gz file. Download and extract that file.

You can then run it in your preferred emulator; this command will run QEMU with various features Redox can use enabled:

qemu-system-x86_64 -serial mon:stdio -d cpu_reset -d guest_errors -smp 4 -m 1024 -s -machine q35 -device ich9-intel-hda -device hda-duplex -net nic,model=e1000 -net user -device nec-usb-xhci,id=xhci -device usb-tablet,bus=xhci.0 -enable-kvm -cpu host -drive file=harddrive.bin,format=raw

If necessary, change harddrive.bin to the path of the .bin file you just extracted.

Once the system is fully booted, you will be greeted by the Redox OS login screen. To log in, enter the following information:

User: root
Password: password

Running Redox on real hardware

Currently, Redox only natively supports booting from a hard disk with no partition table. Therefore, the current ISO image uses a bootloader to load the filesystem into memory and emulates a hard disk. This is inefficient and requires a somewhat large amount of memory, which will be fixed once proper support for various things (such as a USB mass storage driver) is implemented.

Despite the awkward way it works, the ISO image is the recommended way to try out Redox on real hardware (in an emulator, a virtual hard drive is better). You can obtain an ISO image either by downloading the latest release, or by building one with make iso from the Redox source tree.

You can create a bootable CD or USB drive from the ISO as with other bootable disk images.

Hardware support is limited at the moment, so your mileage may vary. There is no USB HID driver, so a USB keyboard or mouse will not work. There is a PS/2 driver, which works with the keyboards and touchpads in many laptops. For networking, the rtl8168d and e1000d Ethernet controllers are currently supported.

Redox isn't going to replace your existing OS just yet, but it's a fun thing to try: boot Redox on your computer and see what works.

Preparing the build

Whoa! You made it this far. Congrats! Now we've got to build Redox.

FIRST-TIME BEGINNERS

Bootstrap Pre-Requisites And Fetch Sources

If you're on a Linux or macOS computer, you can just run the bootstrapping script, which does the build preparation for you. Run the following commands:

$ mkdir -p ~/tryredox
$ cd ~/tryredox
$ curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/bootstrap.sh -o bootstrap.sh
$ bash -e bootstrap.sh

The above does the following:

  • Creates a parent folder called "tryredox". Within that folder, it will create another folder called "redox" where all the sources will reside.
  • Installs the prerequisite packages using your operating system's package manager (Pop!_OS/Ubuntu/Debian: apt; RedHat/CentOS/Fedora: dnf; Arch Linux: pacman).
  • Clones the Redox code from GitLab and checks out a redox-team-tagged version of the different subprojects, intended for the community to test and submit success/bug reports for.

Please be patient; this can take anywhere from 5 minutes to an hour, depending on the hardware and network you're running it on.

Tweaking the filesystem size

The filesystem size is specified in megabytes. The default is 256 MB.

You will probably want a bigger size, such as 20 GB (20480 MB).

  • Open redox/mk/config.mk in your favourite text editor (vim, emacs, gedit, etc.):
    cd ~/tryredox/
    gedit redox/mk/config.mk &
    
  • Look for FILESYSTEM_SIZE and change the value in megabytes:
    FILESYSTEM_SIZE?=20480
    

Customize Settings In config.mk Before Starting To Build

WIP this section needs more work.

Open redox/mk/config.mk in your favourite text editor (vim, emacs, etc.).

What are the interesting settings users might want to change?

Advanced Users

Advanced users may accomplish the same as the above bootstrap.sh script with the following steps.

Be forewarned: the documentation can't always keep up with the bootstrap.sh script, since there are so many distros from which to build Redox OS: macOS, Pop!_OS, Arch Linux, RedHat/CentOS/Fedora.

NOTE: The core Redox OS developers use Pop!_OS to build Redox OS. We recommend Pop!_OS for repeatable, pain-free Redox OS builds.

Clone the repository

Change to the folder where you want your copy of Redox to be stored and issue the following command:

$ mkdir -p ~/tryredox
$ cd ~/tryredox
$ git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
$ cd redox
$ git submodule update --recursive --init

Please be patient; this can take anywhere from 5 minutes to an hour, depending on the hardware and network you're running it on.

Install Pre-Requisite Packages

Pop OS Linux Users:

$ sudo apt-get install cmake make nasm qemu pkg-config libfuse-dev wget gperf libhtml-parser-perl autoconf flex autogen po4a expat openssl automake aclocal

ArchLinux Users:

$ sudo pacman -S cmake make nasm qemu pkg-config libfuse-dev wget gperf libhtml-parser-perl autoconf flex autogen po4a expat openssl automake aclocal

Fedora/Redhat/Centos Linux Users:

$ sudo dnf install cmake make nasm qemu pkg-config libfuse-dev wget gperf libhtml-parser-perl autoconf flex autogen po4a expat openssl automake aclocal
$ sudo dnf install gettext-devel bison flex perl-HTML-Parser libtool perl-Pod-Xhtml gperf libpng-devel patch
$ sudo dnf install perl-Pod-Html

MacOS Users using MacPorts:

$ sudo port install make nasm qemu gcc7 pkg-config osxfuse x86_64-elf-gcc

MacOS Users using Homebrew:

$ brew install automake bison gettext libtool make nasm qemu gcc@7 pkg-config Caskroom/cask/osxfuse
$ brew install redox-os/gcc_cross_compilers/x86_64-elf-gcc

Install Rust Stable And Nightly

Install Rust, make the nightly version your default toolchain, then list the installed toolchains:

$ curl https://sh.rustup.rs -sSf | sh
$ rustup default nightly
$ rustup toolchain list
$ cargo install --force --version 0.3.20 xargo

NOTE: xargo allows Redox OS to build a custom libstd.

NOTE: ~/.cargo/bin has been added to your PATH for the running session.

Add the following line to your shell start-up file, like .bashrc or .bash_profile, for future Rust sessions:

export PATH=${PATH}:~/.cargo/bin

Updating The Sources

How to update submodules using make pull

cd ~/tryredox/redox
make pull

How to update package sources using make fetch

cd ~/tryredox/redox
make fetch

Change Sources To Build ARM 64-bit Redox OS Image

WIP this section needs more work

cd ~/tryredox/redox
make distclean # important to remove x86_64 stuff
git remote update
git fetch
git restore mk/config.mk
git checkout aarch64-rebase
git pull
git submodule update --init --recursive
git rev-parse HEAD
git pull

Install Additional Tools To Build And Run ARM 64-bit Redox OS Image

sudo apt-get install u-boot-tools
sudo apt-get install qemu-system-arm qemu-efi

Next steps

Once this is all set up, we can finally compile Redox.

Building/Compiling The Entire Redox Project

Now we have:

  • fetched the sources
  • tweaked the settings to our liking
  • possibly added our very own source/binary package to the Filesystem.toml

We are ready to build the entire Redox Operating System Image.

Build Redox image

Building Redox-OS for x86_64:

$ cd ~/tryredox/redox/
$ time make all

Building Redox-OS for aarch64/arm64:

$ cd ~/tryredox/redox/
time ./aarch64.sh

Give it a while. Redox is big.

  • "make all" fetches the sources for the core tools from the Redox OS source servers, then builds them.
  • It creates a few empty files holding different parts of the final image filesystem.
  • Using the newly built core tools, it builds the non-core packages into one of those filesystem parts.
  • It fills the remaining filesystem parts appropriately with material built by the core tools to help boot Redox.
  • It merges the different filesystem parts into a final Redox operating system hard drive image, ready to run in QEMU.

Cleaning Previous Build Cycles

Cleaning Intended For Rebuilding Core Packages And Entire System

When you need to rebuild core-packages like relibc, gcc and related tools, clean the entire previous build cycle with:

cd ~/tryredox/redox/
rm -rf prefix/x86_64-unknown-redox/relibc-install/ cookbook/recipes/gcc/{build,sysroot,stage*} build/filesystem.bin

or try touching:

cd ~/tryredox/redox/
touch initfs.toml
touch filesystem.toml

Cleaning Intended For Only Rebuilding Non-Core Package(s)

If you're only rebuilding a non-core package, you can partially clean the previous build cycle, just enough to force a rebuild of that package:

cd ~/tryredox/redox/
rm build/filesystem.bin

or try touching:

cd ~/tryredox/redox/
touch filesystem.toml

Running Redox

Running The Redox Desktop

To run Redox, do:

$ make qemu

This should open up a Qemu window, booting to Redox.

If it does not work, try:

$ make qemu kvm=no # we disable KVM

or:

$ make qemu iommu=no

If this doesn't work either, you should go open an issue.

Running The Redox Console Only

We disable the GUI desktop by passing "vga=no". The following disables graphics support and welcomes you with the Redox console:

$ make qemu vga=no 

It is advantageous to run the console in order to capture the output from non-GUI applications. It helps to debug applications and to share captured console logs with other developers in the Redox community.

Running The Redox Console With A Qemu Tap For Network Testing

You can expose Redox to other computers within a LAN by configuring QEMU with a "TAP" device, which will allow other computers to test Redox's client/server/networking capabilities.

Here are the steps to configure Qemu Tap: WIP

Note

If you encounter any bugs, errors, obstructions, or other annoying things, please report the issue to the Redox repository. Thanks!

Trying Out Redox

Use the F2 key to get to a login shell. The user user can log in without a password. For root, the password is password for now. help lists the builtin commands for your shell (ion). ls /bin will show a list of applications you can execute.

Use the F3 key to switch to a graphical user interface (Orbital). Log in with the same username/password combinations as above.

Use the F1 key to get back to kernel output.

Sodium

Sodium is Redox's Vi-like editor. To try it out:

  1. Open the terminal by clicking the icon in the button bar
  2. Type sudo pkg install sodium to install Sodium. You will need a network connection for this.
  3. Type sodium. This should now open up a separate editor window.

A short list of the Sodium defaults:

  • hjkl: Navigation keys.
  • ia: Go to insert/append mode.
  • ;: Go to command-line mode.
  • shift-space: Go to normal mode.

For a more extensive list, write ;help.

Setting a reminder/countdown

To demonstrate the ANSI support, we will play around with fancy reminders.

Open up the terminal emulator. Now, write rem -s 10 -b. This will set a 10-second countdown with a progress bar.

Playing around with Rusthello

Rusthello is an advanced Reversi AI made by HenryTheCat. It is highly concurrent, which demonstrates Redox's multithreading capabilities. It supports various AIs, such as brute-forcing, minimax, local optimization, and hybrid AIs.

Oh, let's try it out!

# Install Rusthello:
$ sudo pkg install games
# Start it:
$ rusthello

Then you will get prompted for various things, such as difficulty, AI setup, and so on. When this is done, Rusthello interactively starts the battle between you and an AI or an AI and an AI.

Exploring OrbTK

Click the OrbTK demo app in the menu bar. This will open a graphical user interface that demonstrates the different widgets OrbTK currently supports.

Questions, Feedback, Reporting Issues

Join the Redox OS chat server; it is the best way to chat with the Redox OS team and hang out with other Redox OS fans! The chat server holds many channels focusing on different aspects of Redox OS development and usage. Send an email to info@redox-os.org and simply say you'd like to join the chat.

Alternatively, you can use our Redox OS Discourse forum.

If you would like to report Redox OS issues, please do so at the GitLab Redox project issue tracker by clicking the New Issue button.

Explore

This chapter will be dedicated to exploring every aspect of a running Redox system, in gruesome detail.

We will start with the boot system, continuing to the shell and command-line utilities, moving on to the GUI, all while explaining where things happen, and how to change them.

Redox is meant to be an insanely customizable system, allowing a user to tear it down to a very small command-line distro, or build it up to a full desktop environment with ease.

Boot Process

Bootloader

The first code to be executed is the boot sector in bootloader/${ARCH}/bootsector.asm. This loads the bootloader from the first partition. In Redox, the bootloader finds the kernel and loads it in full at address 0x100000. It also initializes the memory map and the VESA display mode, as these rely on BIOS functions that cannot be accessed easily once control is switched to the kernel.

Kernel

The kernel is entered through the interrupt table, with interrupt 0xFF. This interrupt is only available in the bootloader. By utilizing this method, all kernel entry can be contained to a single function, the kernel function, found in kernel/main.rs, that serves as the entry point in the kernel.bin executable file.

At this stage, the kernel copies the memory map out of low memory, sets up an initial page mapping, allocates the environment object, defined in kernel/env/mod.rs, and begins initializing the drivers and schemes that are embedded in the kernel. This process will print out kernel information such as the following:

Redox 32 bits
  * text=101000:151000 rodata=151000:1A4000
  * data=1A4000:1A5000 bss=1A5000:1A6000
 + PS/2
   + Keyboard
     - Reset FA, AA
     - Set defaults FA
     - Enable streaming FA
   + PS/2 Mouse
     - Reset FA, AA
     - Set defaults FA
     - Enable streaming FA
 + IDE on 0, 0, 0, 0, C120, IRQ: 0
   + Primary on: C120, 1F0, 3F4, IRQ E
     + Master: Status: 58 Serial: QM00001 Firmware: 2.0.0 Model: QEMUHARDDISK 48-bit LBA Size: 128 MB
     + Slave: Status: 0
   + Secondary on: C128, 170, 374, IRQ F
     + Master: Status: 41 Error: 2
     + Slave: Status: 0

After initializing the in-kernel structures, drivers, and schemes, the first userspace process spawned by the kernel is the init process, more specifically the initfs:/bin/init process.

Init

Redox has a multi-staged init process, designed to allow for the loading of disk drivers in a modular and configurable fashion. This is commonly referred to as an init ramdisk.

Ramdisk Init

The ramdisk init has the job of loading the drivers required to access the root filesystem and then transferring control to the userspace init. The ramdisk itself is a filesystem linked with the kernel and loaded by the bootloader as part of the kernel image. You can see the code associated with the init process in crates/init/main.rs.

The ramdisk init loads, by default, the file /etc/init.rc, which may be found in initfs/etc/init.rc. This file currently has the contents:

echo ############################
echo ##  Redox OS is booting   ##
echo ############################
echo

# Load the filesystem driver
initfs:/bin/redoxfsd disk:/0

# Start the filesystem init
cd file:/
init

As such, it is very easy to modify Redox to load a different filesystem as the root, or to move processes and drivers in and out of the ramdisk.

Filesystem Init

As seen above, the ramdisk init has the job of loading and starting the filesystem init. By default, this will mean that a new init process will be spawned that loads a new configuration file, now in the root filesystem at filesystem/etc/init.rc. This file currently has the contents:

echo ############################
echo ##  Redox OS has booted   ##
echo ##  Press enter to login  ##
echo ############################
echo

# Login process, handles debug console
login

Modifying this file allows for booting directly to the GUI. For example, we could replace login with orbital.

Login

After the init processes have set up drivers and daemons, it is possible for the user to log in to the system. A simple login program is currently used; its source may be found in crates/login/main.rs.

The login program accepts a username (currently, any username may be used), prints the /etc/motd file, and then executes sh. The motd file can be configured to print any message; it is at filesystem/etc/motd and currently has the contents:

############################
##  Welcome to Redox OS   ##
##  For GUI: Run orbital  ##
############################

At this point, the user will be able to access the shell.

Graphical overview

Here is an overview of the initialization process, with scheme creation and usage. For simplicity's sake, we do not depict all scheme interactions, only the major ones.

Redox initialization graph

Shell

The shell used in Redox is ion.

When ion is called without "-c", it starts a main loop, which can be found inside Shell.execute().

        self.print_prompt();
        while let Some(command) = readln() {
            let command = command.trim();
            if !command.is_empty() {
                self.on_command(command, &commands);
            }
            self.update_variables();
            self.print_prompt();
        }

self.print_prompt() is used to print the shell prompt.

The readln() function is the input reader. The code can be found in crates/ion/src/input_editor.

The documentation about trim() can be found here. If the command is not empty, the on_command method will be called. Then, the shell will update variables, and reprint the prompt.

fn on_command(&mut self, command_string: &str, commands: &HashMap<&str, Command>) {
    self.history.add(command_string.to_string(), &self.variables);

    let mut pipelines = parse(command_string);

    // Execute commands
    for pipeline in pipelines.drain(..) {
        if self.flow_control.collecting_block {
            // TODO move this logic into "end" command
            if pipeline.jobs[0].command == "end" {
                self.flow_control.collecting_block = false;
                let block_jobs: Vec<Pipeline> = self.flow_control
                                               .current_block
                                               .pipelines
                                               .drain(..)
                                               .collect();
                match self.flow_control.current_statement.clone() {
                    Statement::For(ref var, ref vals) => {
                        let variable = var.clone();
                        let values = vals.clone();
                        for value in values {
                            self.variables.set_var(&variable, &value);
                            for pipeline in &block_jobs {
                                self.run_pipeline(&pipeline, commands);
                            }
                        }
                    },
                    Statement::Function(ref name, ref args) => {
                        self.functions.insert(name.clone(), Function { name: name.clone(), pipelines: block_jobs.clone(), args: args.clone() });
                    },
                    _ => {}
                }
                self.flow_control.current_statement = Statement::Default;
            } else {
                self.flow_control.current_block.pipelines.push(pipeline);
            }
        } else {
            if self.flow_control.skipping() && !is_flow_control_command(&pipeline.jobs[0].command) {
                continue;
            }
            self.run_pipeline(&pipeline, commands);
        }
    }
}

First, on_command adds the command to the shell history with self.history.add(command_string.to_string(), &self.variables);.

Then the script will be parsed. The parser code is in crates/ion/src/peg.rs. parse returns a set of pipelines, with each pipeline containing a set of jobs. Each job represents a single command with its arguments.

pub struct Pipeline {
    pub jobs: Vec<Job>,
    pub stdout: Option<Redirection>,
    pub stdin: Option<Redirection>,
}
pub struct Job {
    pub command: String,
    pub args: Vec<String>,
    pub background: bool,
}
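To make the parser's output concrete, here is a standalone sketch with the structs re-declared and a deliberately naive, hypothetical parse (pipes separate jobs, whitespace separates arguments). The real parser in crates/ion/src/peg.rs handles much more, such as redirections and quoting:

```rust
#[derive(Debug)]
struct Job {
    command: String,
    args: Vec<String>,
    background: bool,
}

#[derive(Debug)]
struct Pipeline {
    jobs: Vec<Job>,
}

// Naive illustration of how a command line decomposes into a Pipeline of Jobs.
fn parse(line: &str) -> Pipeline {
    let jobs: Vec<Job> = line
        .split('|')
        .map(|part| {
            let mut words = part.split_whitespace().map(String::from);
            Job {
                command: words.next().unwrap_or_default(),
                args: words.collect(),
                background: false,
            }
        })
        .collect();
    Pipeline { jobs }
}

fn main() {
    let p = parse("ls -l | grep txt");
    assert_eq!(p.jobs.len(), 2);
    assert_eq!(p.jobs[0].command, "ls");
    assert_eq!(p.jobs[0].args, ["-l"]);
    assert_eq!(p.jobs[1].command, "grep");
    println!("{:?}", p);
}
```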

What Happens Next:

  • If the current block is a collecting block (a for loop or a function declaration) and the current command is end, we close the block:
    • If the block is a for loop we run the loop.
    • If the block is a function declaration we push the function to the functions list.
  • If the current block is a collecting block but the current command is not end, we add the current command to the block.
  • If the current block is not a collecting block, we simply execute the current command.

The code blocks are defined in crates/ion/src/flow_control.rs.

pub struct CodeBlock {
    pub pipelines: Vec<Pipeline>,
}

The function code can be found in crates/ion/src/functions.rs.

Pipelines themselves are executed by run_pipeline().

The Command struct inside crates/ion/src/main.rs maps each command to a description and a method to be executed. For example:

commands.insert("cd",
                Command {
                    name: "cd",
                    help: "Change the current directory\n    cd <path>",
                    main: box |args: &[String], shell: &mut Shell| -> i32 {
                        shell.directory_stack.cd(args, &shell.variables)
                    },
                });

cd is described by "Change the current directory\n cd <path>", and when it is called, the method shell.directory_stack.cd(args, &shell.variables) will be executed. You can see its code in crates/ion/src/directory_stack.rs.
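Note that the box keyword in the snippet above is old nightly-Rust syntax. The same builtin-command table can be sketched in stable Rust with Box::new (names here are simplified, not ion's exact types):

```rust
use std::collections::HashMap;

// Simplified sketch of a builtin-command table: each entry pairs a help
// string with a boxed closure that implements the command.
struct Command {
    name: &'static str,
    help: &'static str,
    main: Box<dyn Fn(&[String]) -> i32>,
}

fn main() {
    let mut commands: HashMap<&str, Command> = HashMap::new();
    commands.insert("echo",
                    Command {
                        name: "echo",
                        help: "Print the given arguments\n    echo <args>",
                        main: Box::new(|args: &[String]| -> i32 {
                            println!("{}", args.join(" "));
                            0 // exit status
                        }),
                    });

    // Look up and invoke a builtin, just as on_command would.
    let cmd = &commands["echo"];
    let status = (cmd.main)(&["hello".to_string(), "redox".to_string()]);
    assert_eq!(status, 0);
    assert_eq!(cmd.name, "echo");
}
```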

GUI

The desktop environment in Redox, referred to as Orbital, is provided by a set of programs that run in userspace:

Programs

The following are command-line utilities that provide GUI services

orbital

The orbital display and window manager sets up the orbital: scheme, manages the display, and handles requests for window creation, redraws, and event polling.

launcher

The launcher is a multi-purpose program that scans the applications in the /apps/ directory and provides the following services:

Called Without Arguments

A taskbar that displays icons for each application

Called With Arguments

An application chooser that opens a file in a matching program

  • If one application is found that matches, it will be opened automatically
  • If more than one application is found, a chooser will be shown

Applications

The following are GUI utilities that can be found in the /apps/ directory.

Calculator

A calculator that provides similar functionality to the calc program

Editor

A simple editor that is similar to notepad

File Browser

A file browser that displays icons, names, sizes, and details for files. It uses the launcher command to open files when they are clicked

Image Viewer

A simple image viewer

Pixelcannon

A 3D renderer that can be used for benchmarking the Orbital desktop.

Sodium

A vi-like editor that provides syntax highlighting

Terminal Emulator

An ANSI terminal emulator that launches sh by default.

The design of Redox

This chapter will go over the design of Redox: the kernel, the user space, the ecosystem, the trade-offs and much more.

Components of Redox

Redox is made up of several discrete components.

  • ion - shell
  • TFS/RedoxFS - filesystem
  • kernel
  • drivers
  • orbital - DE/WM/Display Server

Orbital subcomponents

  • orbterm - terminal
  • orbdata - images, fonts, etc.
  • orbaudio - audio
  • orbutils - bunch of applications
  • orblogin - login prompt
  • orbtk - like gtk but orb
  • orbfont - font rendering library
  • orbclient - display client
  • orbimage - image rendering library

Core Applications

  • Sodium - text editor
  • orbutils
    • background
    • browser
    • calculator
    • character map
    • editor
    • file manager
    • launcher
    • viewer

URLs, Schemes, and Resources

This is one of the most important design choices Redox makes. These three essential concepts are very entangled.

What does "Everything is a URL" mean?

"Everything is a URL" is a generalization of "Everything is a file", allowing broader use of this unified interface for schemes.

Schemes can be used to extend the system in a clean, unified manner, rather than through ad-hoc patchwork.

The term is rather misleading, since a URL is just the identifier of a scheme and a resource descriptor. So in that sense "Everything is a scheme, identified by a URL" is more accurate, but not very catchy.

So, how does it differ from files?

You can think of URLs as segregated virtual file systems, which can be arbitrarily structured (they do not have to be tree-like) and arbitrarily defined by a program. Furthermore, "files" don't have to behave file-like! More on this later.

It opens up a lot of possibilities.

[... TODO]

The idea of virtual file systems is not a new one. If you are on a Linux computer, you should try to cd to /proc, and see what's going on there.
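On a Linux machine you can explore these virtual files directly; none of the following "files" are stored on disk, they are generated on demand by the kernel:

```shell
# /proc is a virtual filesystem: its contents are generated on demand.
cat /proc/uptime        # seconds since boot and aggregate idle time
cat /proc/self/status   # information about the process doing the reading
ls /proc/$$/fd          # open file descriptors of the current shell
```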

Redox extends this concept to a much more powerful one.

TODO

URLs

The URL itself is a relatively uninteresting (yet very important) notion for the design of Redox. The interesting part is what it represents.

The URL

In short, a URL is an identifier of a resource. It contains two parts:

  1. The scheme part. This represents the "receiver", i.e. what scheme will handle the (F)OPEN call. This can be any arbitrary UTF-8 string, and will often simply be the name of your protocol.

  2. The reference part. This represents the "payload" of the URL, namely what the URL refers to. Consider file, as an example. A URL starting with file: simply has a reference which is a path to a file. The reference can be any arbitrary byte string. The parsing, interpretation, and storage of the reference is left to the scheme. For this reason, it is not required to be a tree-like structure.

So, the string representation of a URL looks like:

[scheme]:[reference]

For example:

file:/path/to/myfile

Note that the // commonly seen in URLs is not required; it is omitted for convenience.
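The split into the two parts can be sketched in a few lines of Rust (illustrative only; the helper name split_url is ours, not a Redox API):

```rust
// Split a Redox-style URL into (scheme, reference) at the first ':'.
// Returns None if the string contains no ':' at all.
fn split_url(url: &str) -> Option<(&str, &str)> {
    let idx = url.find(':')?;
    Some((&url[..idx], &url[idx + 1..]))
}

fn main() {
    assert_eq!(split_url("file:/path/to/myfile"),
               Some(("file", "/path/to/myfile")));
    assert_eq!(split_url("tcp:0.0.0.0"), Some(("tcp", "0.0.0.0")));
    assert_eq!(split_url("no-scheme-here"), None);
}
```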

Opening a URL

URLs can be opened, yielding schemes, which can be opened to resources, which can be read, written and (for some resources) seeked (there are more operations which are described later on).

For compatibility reasons, we use a file API similar to the Rust standard library's for opening URLs:

use std::fs::OpenOptions;
use std::io::prelude::*;

fn main() {
    // Let's read from a TCP stream
    let tcp = OpenOptions::new()
        .read(true)  // readable
        .write(true) // writable
        .open("tcp:0.0.0.0")
        .expect("Failed to open tcp:0.0.0.0");
}

TODO: Maybe do something with the tcp stream. Ping-pong?

TODO: The terminology may be somewhat confusing for the reader.

How do URLs work under the hood?

Representation

Since it is impossible to go from user space to ring 0 in a typed manner, we have to use some weakly typed representation (that is, we can't use an enum, unless we want to do transmutations and friends). Therefore, we use a string-like representation when moving to kernel space. This is basically just a raw pointer to a C-like, null-terminated string. To avoid further overhead, we use more efficient representations:

Url<'a>

The first of the three representations is the simplest one. It consists of a struct containing two fat pointers, representing the scheme and the reference respectively.

OwnedUrl

This is a struct containing two Strings (that is, growable, heap-allocated UTF-8 strings), being the scheme and the reference respectively.

CowUrl<'a>

This is a Copy-on-Write (CoW) URL, which, when mutated, gets cloned to heap. This way, you get efficient conditional allocation of the URL.

Not much fanciness here.
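The CoW behavior described here is the same pattern the standard library provides as std::borrow::Cow. A minimal sketch of conditional allocation (normalize_scheme is a hypothetical helper, not a Redox API):

```rust
use std::borrow::Cow;

// Lowercase a scheme name, allocating only when a change is actually needed.
fn normalize_scheme(s: &str) -> Cow<'_, str> {
    if s.chars().all(|c| c.is_ascii_lowercase()) {
        Cow::Borrowed(s) // untouched: no allocation, just a borrow
    } else {
        Cow::Owned(s.to_ascii_lowercase()) // "mutated": cloned to the heap
    }
}

fn main() {
    assert!(matches!(normalize_scheme("file"), Cow::Borrowed(_)));
    assert!(matches!(normalize_scheme("File"), Cow::Owned(_)));
}
```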

Opening a URL

Opening URLs happens through the OPEN system call. OPEN takes a C-like, null-terminated string, and two pointer-sized integers, keeping the open flags and the file mode, respectively.

The path argument of OPEN does not have to be a URL. For compatibility reasons, it will default to the file: scheme. If otherwise specified, the scheme will be resolved by the registrar (see The root scheme), and then opened.
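The C-like string that crosses the syscall boundary can be modeled with std::ffi::CString (illustrative only; Redox's actual syscall wrappers differ):

```rust
use std::ffi::CString;

fn main() {
    // The path argument crosses into the kernel as a null-terminated
    // byte string; CString models exactly that representation.
    let url = CString::new("file:/path/to/myfile")
        .expect("URL must not contain interior NUL bytes");

    // The kernel side would receive a raw pointer and scan to the 0 byte.
    let bytes = url.as_bytes_with_nul();
    assert_eq!(bytes.last(), Some(&0u8));
    assert_eq!(bytes.len(), "file:/path/to/myfile".len() + 1);
}
```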

TODO

Schemes

Schemes are the natural counter-part to URLs. URLs are opened to schemes, which can then be opened to yield a resource.

Schemes are named such that the kernel is able to uniquely identify them. This name is used in the scheme part of the URL.

Schemes are a generalization of file systems. It should be noted that schemes do not necessarily represent normal files; they are often a "virtual file" (i.e., an abstract unit with certain operations defined on it).

Throughout the whole ecosystem of Redox, schemes are used as the main communication primitive because they are a powerful abstraction. With schemes Redox can have one unified I/O interface.

Schemes can be defined both in user space and in kernel space, but user space is preferred when possible.

Kernel Schemes

The kernel provides a small number of schemes in order to support userspace.

| Name    | Description                                                   | Links |
|---------|---------------------------------------------------------------|-------|
| :       | Root scheme - allows the creation of userspace schemes        | Docs  |
| debug:  | Provides access to serial console                             | Docs  |
| event:  | Allows reading of `Event`s which are registered using fevent  | Docs  |
| env:    | Access and modify environmental variables                     | Docs  |
| initfs: | Readonly filesystem used for initializing the system          | Docs  |
| irq:    | Allows userspace handling of IRQs                             | Docs  |
| pipe:   | Used internally by the kernel to implement pipe               | Docs  |
| sys:    | System information, such as the context list and scheme list  | Docs  |

Userspace Schemes

The Redox userspace, starting with initfs:bin/init, will create schemes during initialization. Once the user is able to log in, the following should be established:

| Name      | Daemon           | Description                                                                               |
|-----------|------------------|-------------------------------------------------------------------------------------------|
| disk:     | ahcid            | Raw access to disks                                                                       |
| display:  | vesad            | Screen multiplexing of the display, provides text and graphical screens, used by orbital: |
| ethernet: | ethernetd        | Raw ethernet frame send/receive, used by ip:                                              |
| file:     | redoxfs          | Root filesystem                                                                           |
| ip:       | ipd              | Raw IP packet send/receive                                                                |
| network:  | e1000d, rtl8168d | Link level network send/receive, used by ethernet:                                        |
| null:     | nulld            | Scheme that will discard all writes, and read no bytes                                    |
| orbital:  | orbital          | Windowing system                                                                          |
| pty:      | ptyd             | Pseudoterminals, used by terminal emulators                                               |
| rand:     | randd            | Pseudo-random number generator                                                            |
| tcp:      | tcpd             | TCP sockets                                                                               |
| udp:      | udpd             | UDP sockets                                                                               |
| zero:     | zerod            | Scheme that will discard all writes, and always fill read buffers with zeroes             |

Scheme operations

What makes a scheme a scheme? Scheme operations!

A scheme is just a data structure with certain functions defined on it:

  1. open - open the scheme. open is used for initially starting communication with a scheme; it is an optional method, and will default to returning ENOENT.

  2. mkdir - make a new sub-structure. Note that the name is a little misleading (and it might even be renamed in the future), since in many schemes mkdir won't make a directory, but instead perform some form of substructure creation.

Optional methods include:

  1. unlink - remove a link (that is a binding from one substructure to another).

  2. link - add a link.

The Root Scheme

The root scheme is the kernel scheme which is used for registering and retrieving information about schemes. The root scheme's name is simply an empty string ("").

Registering a Scheme

Registering a scheme is done by opening the name of the scheme with the CREATE flag, in the root scheme.

TODO

Resources

Resources are opened schemes. You can think of them like an established connection between the scheme provider and the client.

Resources are closely connected to schemes and are sometimes intertwined with schemes. The difference between schemes and resources is subtle but important.

Resource operations

A resource can be defined as a data type with the following methods defined on it:

  1. read - read N bytes to a buffer provided as argument. Defaults to EBADF.
  2. write - write a buffer to the resource. Defaults to EBADF.
  3. seek - seek the resource. That is, move the "cursor" without writing. Many resources do not support this operation. Defaults to EBADF.
  4. close - close the resource, potentially releasing a lock. Defaults to EBADF.

TODO add F-operations.
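The interface above can be sketched as a Rust trait whose default method bodies return EBADF; the names and error type here are illustrative, not the kernel's actual definitions (though 9 is the POSIX value of EBADF):

```rust
#[derive(Debug, PartialEq)]
struct Error(i32);
const EBADF: i32 = 9; // "bad file descriptor", the POSIX value
type Result<T> = std::result::Result<T, Error>;

// Sketch of a resource: every operation defaults to an EBADF error,
// and a concrete resource overrides only what it supports.
trait Resource {
    fn read(&mut self, _buf: &mut [u8]) -> Result<usize> { Err(Error(EBADF)) }
    fn write(&mut self, _buf: &[u8]) -> Result<usize> { Err(Error(EBADF)) }
    fn seek(&mut self, _pos: usize) -> Result<usize> { Err(Error(EBADF)) }
    fn close(&mut self) -> Result<usize> { Err(Error(EBADF)) }
}

// A zero:-like resource that only supports read; all other
// operations keep the EBADF defaults.
struct Zero;
impl Resource for Zero {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        buf.fill(0);
        Ok(buf.len())
    }
}

fn main() {
    let mut z = Zero;
    let mut buf = [1u8; 4];
    assert_eq!(z.read(&mut buf), Ok(4));
    assert_eq!(buf, [0; 4]);
    assert_eq!(z.write(b"x"), Err(Error(EBADF)));
}
```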

The resource type

There are two types of resources:

  1. File-like resources. These behave a lot like files. They act in a blocking manner; reads and writes are "buffer-like".
  2. Socket-like resources. These behave like sockets. They act in a non-blocking manner; reads and writes are more "stream-like".

TODO Expand on this.

Stitching it All Together

The "URL, scheme, resource" model is simply a unified interface for efficient inter-process communication. URLs are simply resource descriptors. Schemes are simply resource "entries", which can be opened. You can think of a scheme as a closed book. It cannot itself be read or written, but you can open it to an open book: a resource. Resources are simply primitives for communications. They can behave either socket-like (as a stream of bytes, e.g. TCP and Orbital) or file-like (as an on-demand byte buffer, e.g. file systems and stdin).

A quick, ugly diagram would look like this:

             /
             |                                                          +=========+
             |                                                          | Program |
             |                                                          +=========+
             |               +--------------------------------------+      ^   | write
             |               |                                      |      |   |
  User space <  +----- URL -----+                                   | read |   v
             |  | +-----------+ |       open    +---------+  open   |   +----------+
             |  | |  Scheme   |-|---+  +------->| Scheme  |------------>| Resource |
             |  | +-----------+ |   |  |        +---------+             +----------+
             |  | +-----------+ |   |  |
             |  | | Reference | |   |  |
             |  | +-----------+ |   |  |
             \  +---------------+   |  |
                            resolve |  |
             /                      v  |
             |                 +=========+
Kernel space <                 | Resolve |
             |                 +=========+
             \

TODO

"Everything is a URL"

"Everything is a URL" is an important principle in the design of Redox. Roughly speaking it means that the API, design, and ecosystem is centered around URLs, schemes, and resources as the main communication primitive. Applications communicate with each other, the system, daemons, etc, using URLs. As such, programs do not have to create their own constructs for communication.

By unifying the API in this way, you are able to have a consistent, clean, and flexible interface.

We can't really claim credit for this concept (beyond our exact design and implementation). The idea is not a new one: it is very similar to 9P from Plan 9 by Bell Labs, and a similar approach has been taken in Unix and its successors.

How it differs from "Everything is a file"

With "Everything is a file" all sorts of devices, processes, and kernel parameters can be accessed as files in a regular filesystem. This leads to absurd situations: the hard disk that contains the root filesystem / shows up inside that same filesystem as the device file /dev/sda. Furthermore, many file properties don't make sense on these 'special files': what is the size of /dev/null, or of a configuration option in sysfs?

In contrast to "Everything is a file", Redox does not enforce a common tree node for all kinds of resources. Instead resources are distinguished by protocol. This way USB devices don't end up in a "filesystem", but a protocol-based scheme like EHCI. Real files are accessible through a scheme called file, which is widely used and specified in RFC 1630 and RFC 1738.

An example

Enough theory! Time for an example.

We will implement a scheme which holds a vector. The scheme will push elements to the vector when it receives writes, and pop them when it is read. Let's call it vec:.

The complete source for this example can be found at redox-os/vec_scheme_example.

Setup

In order to build and run this example in a Redox environment, you'll need to be set up to compile the OS from source. The process for getting a program included in a local Redox build is laid out in chapter 5. Pause here and follow that guide if you want to get this example running.

This example assumes that vec was used as the name of the crate instead of helloworld. The crate should therefore be located at cookbook/recipes/vec/source.

Modify the Cargo.toml for the vec crate so that it looks something like this:

[package]
name = "vec"
version = "0.1.0"
edition = "2018"

[[bin]]
name = "vec_scheme"
path = "src/scheme.rs"

[[bin]]
name = "vec"
path = "src/client.rs"

[dependencies]
redox_syscall = "^0.2.6"

Notice that there are two binaries here. We'll need another program to interact with our scheme, since CLI tools like cat use more operations than we strictly need to implement for our scheme. The client uses only the standard library.

The Scheme Daemon

Create src/scheme.rs in the crate. Start by useing a couple of symbols.


use std::cmp::min;
use std::fs::File;
use std::io::{Read, Write};

use syscall::Packet;
use syscall::scheme::SchemeMut;
use syscall::error::Result;

We start by defining our mutable scheme struct, which will implement the SchemeMut trait and hold the state of the scheme.


struct VecScheme {
    vec: Vec<u8>,
}

impl VecScheme {
    fn new() -> VecScheme {
        VecScheme {
            vec: Vec::new(),
        }
    }
}

Before implementing the scheme operations on our scheme struct, let's briefly discuss the way that this struct will be used. Our program (vec_scheme) will create the vec scheme by opening the corresponding scheme handler in the root scheme (:vec). Let's implement a main() that initializes our scheme struct and registers the new scheme:

fn main() {
    let mut scheme = VecScheme::new();

    let mut handler = File::create(":vec")
        .expect("Failed to create the vec scheme");
}

When other programs open/read/write/etc against our scheme, the Redox kernel will make those requests available to our program via this scheme handler. Our scheme will read that data, handle the requests, and send responses back to the kernel by writing to the scheme handler. The kernel will then pass the results of operations back to the caller.

fn main() {
    // ...

    let mut packet = Packet::default();

    loop {
        // Wait for the kernel to send us requests
        let read_bytes = handler.read(&mut packet)
            .expect("vec: failed to read event from vec scheme handler");

        if read_bytes == 0 {
            // Exit cleanly
            break;
        }

        // Scheme::handle passes off the info from the packet to the individual
        // scheme methods and writes back to it any information returned by
        // those methods.
        scheme.handle(&mut packet);

        handler.write(&packet)
            .expect("vec: failed to write response to vec scheme handler");
    }
}

Now let's deal with the specific operations on our scheme. The scheme.handle(...) call dispatches requests to these methods, so that we don't need to worry about the gory details of the Packet struct.

In most Unix systems (Redox included!), a program needs to open a file before it can do very much with it. Since our scheme is just a "virtual filesystem", programs call open with the path to the "file" they want to interact with when they want to start a conversation with our scheme.

For our vec scheme, let's push whatever path we're given to the vec:


impl SchemeMut for VecScheme {
    fn open(&mut self, path: &str, _flags: usize, _uid: u32, _gid: u32) -> Result<usize> {
        self.vec.extend_from_slice(path.as_bytes());
        Ok(0)
    }
}

Say a program calls open("vec:/hello"). That call will work its way through the kernel and end up being dispatched to this function through our Scheme::handle call.

The usize that we return here will be passed back to us as the id parameter of the other scheme operations. This way we can keep track of different open files. In this case, we won't make a distinction between two different programs talking to us and simply return zero.

Similarly, when a process opens a file, the kernel returns a number (the file descriptor) that the process can use to read and write to that file. Now let's implement the read and write operations for VecScheme.


impl SchemeMut for VecScheme {
    // ...

    // Fill buf with the contents of self.vec, starting at buf[0].
    // Note that this reverses the contents of the Vec.
    fn read(&mut self, _id: usize, buf: &mut [u8]) -> Result<usize> {
        let num_written = min(buf.len(), self.vec.len());

        for b in buf {
            if let Some(x) = self.vec.pop() {
                *b = x;
            } else {
                break;
            }
        }

        Ok(num_written)
    }

    // Simply push any bytes we are given to self.vec
    fn write(&mut self, _id: usize, buf: &[u8]) -> Result<usize> {
        for i in buf {
            self.vec.push(*i);
        }

        Ok(buf.len())
    }
}

Note that each of the methods of the SchemeMut trait provides a default implementation. These will all return errors since they are essentially unimplemented. There's one more method we need to implement in order to prevent errors for users of our scheme:


impl SchemeMut for VecScheme {
    // ...

    fn close(&mut self, _id: usize) -> Result<usize> {
        Ok(0)
    }
}

Most languages' standard libraries call close automatically when a file object is destroyed, and Rust is no exception.

To see all the possible operations on schemes, check out the API docs.

A Simple Client

As mentioned earlier, we need to create a very simple client in order to use our scheme, since some command line tools (like cat) use operations other than open, read, write, and close. Put this code into src/client.rs:

use std::fs::File;
use std::io::{Read, Write};

fn main() {
    let mut vec_file = File::open("vec:/hi")
        .expect("Failed to open vec file");

    vec_file.write(b" Hello")
        .expect("Failed to write to vec:");

    let mut read_into = String::new();
    vec_file.read_to_string(&mut read_into)
        .expect("Failed to read from vec:");

    println!("{}", read_into); // olleH ih/
}

We simply open some "file" in our scheme, write some bytes to it, read some bytes from it, and then spit those bytes out on stdout. Remember, it doesn't matter what path we use, since all our scheme does is add that path to the vec. In this sense, the vec scheme implements a global vector.

Running the Scheme

Since we've already set up the program to build and run in the Redox VM, simply run:

  • touch filesystem.toml
  • make qemu

We'll need multiple terminal windows open in the QEMU window for this step. Notice that both binaries we defined in our Cargo.toml can now be found in file:/bin (vec_scheme and vec). In one terminal window, run sudo vec_scheme. A program needs to run as root in order to register a new scheme. In another terminal, run vec and observe the output.

Exercises for the reader

  • Make the vec scheme print out something whenever it gets events. For example, print out the user and group ids of the user who tries to open a file in the scheme.
  • Create a unique vec for each opened file in your scheme. You might find a hashmap useful for this.
  • Write a scheme that can run code for your favorite esoteric programming language.

The kernel of Redox

The Redox kernel largely derives from the concept of microkernels, with particular inspiration from MINIX. This chapter will discuss the design of the Redox kernel.

Microkernels

Redox's kernel is a microkernel. Microkernels stand out in their design by providing minimal abstractions in kernel-space. Microkernels have an emphasis on user space, unlike Monolithic kernels which have an emphasis on kernel space.

The basic philosophy of microkernels is that any component which can run in user space should run in user space. Kernel-space should only be utilized for the most essential components (e.g., system calls, process separation, resource management, IPC, thread management, etc).

The kernel's main task is to act as a medium for communication and segregation of processes. The kernel should provide minimal abstraction over the hardware (that is, drivers, which can and should run in user mode).

Microkernels are more secure and less prone to crashes than monolithic kernels. This is due to drivers and other abstractions being less privileged, and thus unable to damage the system. Furthermore, microkernels are extremely maintainable due to their small code size, which can potentially reduce the number of bugs in the kernel.

Like anything else, microkernels also have disadvantages. We will discuss these later.

Versus monolithic kernels

Monolithic kernels provide a lot more abstractions than microkernels.

An illustration

The above illustration ([from Wikimedia], by Wooptoo, License: Public domain) shows how they differ.

TODO

A note on the current state

Redox has less than 9,000 lines of kernel code. For comparison, Minix has ~6,000 lines of kernel code.

We would like to move parts of Redox to user space to get an even smaller kernel.

Advantages of microkernels

There are quite a lot of advantages (and disadvantages!) to microkernels, a few of which will be covered here.

Modularity and customizability

Monolithic kernels are, well, monolithic. They do not allow as fine-grained control as microkernels. This is due to many essential components being "hard-coded" into the kernel, and thus requiring modifications to the kernel itself (e.g., device drivers).

Microkernels are very modular by nature. You can replace, reload, modify, change, and remove modules at runtime, without even touching the kernel.

Modern monolithic kernels try to solve this issue using kernel modules but still often require the system to reboot.

Security

Microkernels are undoubtedly more secure than monolithic kernels. The minimality principle of microkernels is a direct consequence of the principle of least privilege, according to which all components should have only the privileges absolutely needed to provide the needed functionality.

Many security-critical bugs in monolithic kernels stem from services and drivers running unrestricted in kernel mode, without any form of protection.

In other words: in monolithic kernels, drivers can do whatever, without restrictions, when running in ring 0.

Fewer crashes

When compared to microkernels, monolithic kernels tend to be crash-prone. A crashed driver in a monolithic kernel can crash the whole system, whereas with a microkernel there is a separation of concerns which allows the system to handle any crash safely.

In Linux we often see errors with drivers dereferencing bad pointers which ultimately results in kernel panics.

There is very good documentation in MINIX about how this can be addressed by a microkernel.

Disadvantages of microkernels

Performance

Any modern operating system needs basic security mechanisms such as virtualization and segmentation of memory. Furthermore any process (including the kernel) has its own stack and variables stored in registers. On context switch, that is each time a syscall is invoked or any other inter-process communication (IPC) is done, some tasks have to be done, including:

  • Saving caller registers, especially the program counter (caller: process invoking syscall or IPC)
  • Reprogramming the MMU's page table and flushing the TLB (the TLB caches page-table entries, so it must be invalidated on a switch)
  • Putting CPU in another mode (kernel mode, user mode)
  • Restoring callee registers (callee: process invoked by syscall or IPC)

These are not inherently slower on microkernels, but microkernels suffer from having to perform these operations more frequently. Much of the system's functionality is performed by user space processes, requiring additional context switches.

The performance difference between monolithic and microkernels has been marginalized over time, making their performance comparable. This is partly due to a smaller surface area which can be easier to optimize.

Unfortunately, Redox isn't quite there yet. We still have a relatively slow kernel since not much time has been spent on optimizing it.

Scheduling on Redox

The Redox kernel uses a scheduling algorithm called Round Robin Scheduling.

The kernel registers a function called an interrupt handler that the CPU calls periodically. This function counts how many times it has been called, and switches to the next ready process every 10 "ticks".

Memory management

TODO

Drivers

TODO

Programs and Libraries

Redox can run programs. Some programs are interpreted by a runtime for the program's language, such as a script running in the Ion shell or a Python program. Others are compiled into machine instructions that run on a particular operating system (Redox) and specific hardware (e.g. x86 compatible CPU in 64-bit mode).

  • In Redox, compiled binaries use the standard ELF ("Executable and Linkable Format") format.

Programs could directly invoke Redox syscalls, but most call library functions that are higher-level and more comfortable to use. You link your program with the libraries it needs.

  • Redox does not support dynamic-link libraries yet (issue #927), so the libraries that a program uses are statically linked into its compiled binary.
  • Most C and C++ programs call functions in a C standard library ("libc") such as fopen
    • Redox includes a port of the newlib Standard C library. This is how programs such as git can run on Redox. newlib has some POSIX compatibility.
  • Rust programs implicitly or explicitly call functions in the Rust standard library (libstd).
    • Redox implements a subset of this in libredox (TODO: verify)
    • The Rust libstd now includes an implementation of its system-dependent parts (such as file access and setting environment variables) for Redox, in src/libstd/sys/redox. Most of libstd works in Redox, so many command-line Rust programs can be compiled for Redox. (TODO: verify)

The Redox "cookbook" project includes recipes for compiling C and Rust projects into Redox binaries.

Coreutils

Coreutils is a collection of basic command line utilities included with Redox (or with Linux, BSD, etc.). This includes programs like ls, cp, cat and various other tools necessary for basic command line interaction.

Redox's coreutils aim to be more minimal than, for instance, the GNU coreutils included with most Linux systems.

Supplement utilities

Binutils

Binutils contains utilities for manipulating binary files. Currently Redox's binutils includes strings, disasm, hex, and hexdump.

Extrautils

Some additional command line tools are included in extrautils, such as less, grep, and dmesg.

Development in user space

Including a Program in a Redox Build

Redox's cookbook makes packaging a program to include in a build fairly straightforward. This example walks through adding the "hello world" program that cargo new automatically generates to a local build of the operating system.

This process is largely the same for other Rust crates and even non-Rust programs.

Step One: Setting up the recipe

The cookbook will only build programs that have a recipe defined in cookbook/recipes. To create a recipe for hello world, first create a directory cookbook/recipes/helloworld. Inside this directory create a file recipe.toml and add these lines to it:

[build]
template = "cargo"

The [build] section defines how cookbook should build our project. There are several templates, but "cargo" should be used for Rust projects.

Cookbook will fetch the sources for a program from a git or tarball URL specified in the [source] section of this file if cookbook/recipes/program_name/source does not exist, and will also fetch updates when running make fetch.

For this example, there is no upstream URL to fetch the sources from, hence no [source] section. Instead, we will simply develop in the source directory.
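For comparison, a recipe that does fetch its sources might look like this (the URL is purely illustrative, not a real recipe):

```toml
# Hypothetical recipe.toml that fetches sources from a git repository.
[source]
git = "https://gitlab.redox-os.org/redox-os/example-program.git"

[build]
template = "cargo"
```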

Step Two: Writing the program

Since this is a hello world example, this step is very straightforward. Simply create cookbook/recipes/helloworld/source. In that directory, run cargo init --name="helloworld".

For cargo projects that already exist, either include a URL to the git repository in the recipe and let cookbook pull the sources automatically during the first build, or simply copy the sources into the source directory.

Step Three: Add the program to the redox build

To be able to access a program from within Redox, it must be added to the filesystem. Open redox/filesystem.toml and find the [packages] table. During the filesystem (re)build, the build system uses cookbook to package all the applications in this table, and then installs those packages to the new filesystem. Simply add helloworld = {} anywhere in this table.

[packages]
#acid = {}
#binutils = {}
contain = {}
coreutils = {}
#dash = {}
extrautils = {}
#
# 100+ omitted for brevity
#
pkgutils = {}
ptyd = {}
randd = {}
redoxfs = {}
#rust = {}
smith = {}
#sodium = {}
userutils = {}

# Add this line:
helloworld = {}

In order to rebuild the filesystem image to reflect changes in the source directory, it is necessary to run touch filesystem.toml before running make.
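In other words (run from the redox/ directory):

```shell
# Bump filesystem.toml's modification time so the next `make` run
# rebuilds the filesystem image with your source changes.
touch filesystem.toml
```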

Step Four: Running your program

Go up to your redox/ directory and run make all. Once the rebuild is finished, run make qemu, log in to Redox, open the terminal, and run helloworld. It should print

Hello, world!

Note that the helloworld binary can be found in file:/bin in the VM (ls file:/bin).

Ion shell

Ion is the underlying library for shells and command execution in Redox, as well as the default shell.

What Ion Is

1. The default shell in Redox

What is a shell?

A shell is a layer around the operating system kernel and libraries that allows users to interact with the operating system. That means a shell can be used on any operating system (Ion runs on both Linux and Redox) or any implementation of a standard library, as long as the provided API is the same. Shells can be either graphical (GUI) or command-line (CLI).

Text shells

Text shells are programs that provide an interactive user interface to an operating system. A shell reads input from the user as they type and performs operations according to that input. This is similar to the read-eval-print loop (REPL) found in many programming languages (e.g. Python).

Typical *nix shells

Probably the most famous shell is Bash, which can be found in the vast majority of Linux distributions, and also in macOS (formerly known as Mac OS X). FreeBSD, on the other hand, uses tcsh by default.

There are many more shell implementations, but these two form the base of two fundamentally different sets:

  • Bourne shell syntax (bash, sh, zsh)
  • C shell syntax (csh, tcsh)

Of course these two groups are not exhaustive; it is worth mentioning at least the fish shell and xonsh. These shells try to abandon some features of old-school shells to make the language safer and saner.

Fancy features

Writing commands without any help from the shell would be exhausting and impractical for everyday work. Therefore, most shells (including Ion, of course!) include features such as command history, autocompletion based on history or man pages, shortcuts to speed up typing, etc.

2. A scripting language

Ion can also be used to write simple scripts for common tasks or system configuration after startup. It is not meant as a fully-featured programming language, but more like a glue to connect other programs together.
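As a small taste of the language (an illustrative sketch; see the Ion manual for authoritative syntax):

```ion
# Illustrative Ion snippet: variable binding and string interpolation.
let name = "Redox"
echo "Hello, $name!"
```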

Relation to terminals

Early terminals were devices used to communicate with large computer systems like IBM mainframes. Nowadays Unix operating systems usually implement so-called virtual terminals (tty stands for teletypewriter ... whoa!) and terminal emulators (e.g. xterm, gnome-terminal).

Terminals are used to read input from a keyboard and display the textual output of the shell and other programs running inside it. This means that a terminal converts keystrokes into control codes that are further used by the shell. The shell provides the user with a command line prompt (for instance: user name and working directory), line editing capabilities (Ctrl + a,e,u,k...), history, and the ability to run other programs (ls, uname, vim, etc.) according to the user's input.

TODO: In Linux we have device files like /dev/tty, how is this concept handled in Redox?

Ion Manual

Ion has its own manual, which you can find here.

Contributing

So you'd like to contribute to make Redox better! Excellent. This chapter can help you to do just that.

Communication

Chat

The quickest and most open way to communicate with the Redox team is on our Mattermost chat server. Currently, the only way to join it is by sending an email to info@redox-os.org, which might take a little while, since it's not automated. We're currently working on an easier way to do this, but this is the most convenient way right now.

Reddit

You can find Redox on Reddit in /r/rust/ and /r/redox/. The weekly update news is posted on the former.

Direct contributions

Low-Hanging Fruit

aka Easy Targets for Newbies, Quick Wins for Pros

If you're not fluent in Rust:

  • Writing documentation
  • Using/testing Redox, filing issues for bugs and needed features
  • Web development (Redox website, separate repo)
  • Writing unit tests (may require minimal knowledge of Rust)

If you are fluent in Rust, but not OS Development:

  • Apps development
  • Shell (Ion) development
  • Package manager (pkgutils) development
  • Other high-level code tasks

If you are fluent in Rust, and have experience with OS Dev:

  • Familiarize yourself with the repository and codebase
  • Grep for TODO, FIXME, BUG, UNOPTIMIZED, REWRITEME, DOCME, and PRETTYFYME and fix the code you find.
  • Improve and optimize code, especially in the kernel

GitLab Issues

GitLab issues are a somewhat formal way to communicate with fellow Redox devs, but a little less quick and convenient than the chat. Issues are a good way to discuss specific topics, but if you want a quick response, using the chat is probably better.

If you haven't requested to join the chat yet, you should (if at all interested in contributing)!

Creating a Proper Bug Report

  1. Make sure the code you are seeing the issue with is up to date with upstream/master. This helps to weed out reports for bugs that have already been addressed.
  2. Make sure the issue is reproducible (trigger it several times). If the issue happens inconsistently, it may still be worth filing a bug report, but indicate approximately how often the bug occurs.
  3. Record build information like:
  • The rust toolchain you used to build Redox
    • rustc -V and/or rustup show from your Redox project folder
  • The commit hash of the code you used
    • git rev-parse HEAD
  • The environment you are running Redox in (the "target")
    • qemu-system-x86_64 -version or your actual hardware specs, if applicable
  • The operating system you used to build Redox
    • uname -a or an alternative format
  4. Make sure that your bug doesn't already have an issue on GitLab. If you submit a duplicate, you should accept that you may be ridiculed for it, though you'll still be helped. Feel free to ask in the Redox chat if you're uncertain as to whether your issue is new
  5. Create a GitLab issue following the template
    • Non-bug report issues may ignore this template
  6. Watch the issue and be available for questions
  7. Success!

With the help of fellow Redoxers, legitimate bugs can be kept to a low simmer, not a boil.

Pull Requests

It's completely fine to just submit a small pull request without first making an issue, but if it's a big change that will require a lot of planning and reviewing, it's best you start with writing an issue first. Also see the git guidelines.

Creating a Proper Pull Request

The steps given below are for the main Redox project - submodules and other projects may vary, though most of the approach is the same.

  1. Clone the original repository to your local PC using one of the following commands based on the protocol you are using:
    • HTTPS: git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
    • SSH: git clone git@gitlab.redox-os.org:redox-os/redox.git --origin upstream --recursive
    • Use HTTPS if you don't know which one to use. (Recommended: learn about SSH if you don't want to have to login every time you push/pull!)
  2. Then rebase to ensure you're using the latest changes: git rebase upstream master
  3. Fork the repository
  4. Add your fork to your list of git remotes with
    • HTTPS: git remote add origin https://gitlab.redox-os.org/your-username/redox.git
    • SSH: git remote add origin git@gitlab.redox-os.org:your-username/redox.git
  5. Alternatively, if you already have a fork and copy of the repo, you can simply check to make sure you're up-to-date
    • Fetch the upstream: git fetch upstream master
    • Rebase with local commits: git rebase upstream/master
    • Update the submodules: git submodule update --recursive --init
  6. Optionally create a separate branch (recommended if you're making multiple changes simultaneously) (git checkout -b my-branch)
  7. Make changes
  8. Commit (git add . --all; git commit -m "my commit")
  9. Optionally run rustfmt on the files you changed and commit again if it did anything (check with git diff first)
  10. Test your changes with make qemu or make virtualbox (you might have to use make qemu kvm=no, formerly make qemu_no_kvm) (see Best Practices and Guidelines)
  11. Pull from upstream (git fetch upstream; git rebase upstream/master) (Note: try not to use git pull, it is equivalent to doing git fetch upstream; git merge master upstream/master, which is not usually preferred for local/fork repositories, although it is fine in some cases.)
  12. Repeat step 10 to make sure the rebase still builds and starts
  13. Push to your fork (git push origin my-branch)
  14. Create a pull request, following the template
  15. Describe your changes
  16. Submit!

Community

Community outreach is an important part of Redox's success. If more people know about Redox, then more contributors are likely to step in, and everyone can benefit from their added expertise. You can make a difference by writing articles, talking to fellow OS enthusiasts, or looking for communities that would be interested in knowing more about Redox.

Best Practices and Guidelines

These are a set of best practices to keep in mind when making a contribution to Redox. As always, rules are made to be broken, but these rules in particular play a part in deciding whether to merge your contribution (or not). So do try to follow them.

Rust Style

Since Rust is a relatively small and new language compared to others like C, there's really only one standard. Just follow the official Rust standards for formatting, and maybe run rustfmt on your changes, until we set up the CI system to do it automatically.

Rusting Properly

Some general guidelines:

  • Use std::mem::replace and std::mem::swap when you can.
  • Use .into() and .to_owned() over .to_string().
  • Prefer passing references to the data over owned data. (Don't take String, take &str. Don't take Vec<T>, take &[T].)
  • Use generics, traits, and other abstractions Rust provides.
  • Avoid using lossy conversions (for example: don't do my_u32 as u16 == my_u16, prefer my_u32 == my_u16 as u32).
  • Prefer in place (box keyword) when doing heap allocations.
  • Prefer platform-independently sized integers over pointer-sized integers (u32 over usize, for example).
  • Follow the usual idioms of programming, such as "composition over inheritance", "let your program be divided in smaller pieces", and "resource acquisition is initialization".
  • When unsafe is unnecessary, don't use it. Safe code that is 10 lines longer is better than more compact unsafe code!
  • Be sure to mark parts that need work with TODO, FIXME, BUG, UNOPTIMIZED, REWRITEME, DOCME, and PRETTYFYME.
  • Use the compiler hint attributes, such as #[inline], #[cold], etc. when it makes sense to do so.
  • Try to banish unwrap() and expect() from your code in order to manage errors properly. Panicking must indicate a bug in the program (not an error you didn't want to manage). If you cannot recover from an error, print a nice error to stderr and exit. Check Rust's book about unwrapping.

Avoiding Kernel Panics

When trying to access a slice, always use the common::GetSlice trait and the .get_slice() method to get a slice without causing the kernel to panic.

The problem with slicing in regular Rust, e.g. foo[a..b], is that if someone tries to access with a range that is out of bounds of an array/string/slice, it will cause a panic at runtime, as a safety measure. Same thing when accessing an element.

Always use foo.get(n) instead of foo[n] and try to cover for the possibility of Option::None. Doing the regular way may work fine for applications, but never in the kernel. No possible panics should ever exist in kernel space, because then the whole OS would just stop working.

Git Guidelines

  • Commit messages should describe their changes in present-tense, e.g. "Add stuff to file.ext" instead of "added stuff to file.ext".
  • Try to remove useless duplicate/merge commits from PRs as these clutter up history, and may make it hard to read.
  • Usually, when syncing your local copy with the master branch, you will want to rebase instead of merge: merging creates duplicate commits that don't actually do anything when merged into the master branch.
  • When you start to make changes, you will want to create a separate branch, and keep the master branch of your fork identical to the main repository, so that you can compare your changes with the main branch and test out a more stable build if you need to.
  • You should have a fork of the repository on GitLab and a local copy on your computer. The local copy should have two remotes, upstream and origin; upstream should be set to the main repository and origin to your fork.