Welcome!

This is the Redox book, which will go through (almost) everything about Redox: design, philosophy, how it works, how you can contribute, how to deploy Redox, and much more.

Please note that this book is still being written.

This book was written by Ticki with the help of LazyOxen, Steve Klabnik, ElijahCaine, and Jackpot51.

If you want to skip straight to trying out Redox, see getting started.

What Redox is

Redox is a general purpose operating system and surrounding ecosystem written in pure Rust. Our aim is to provide a fully functioning Unix-like microkernel that is both secure and free.

We have modest compatibility with POSIX, allowing Redox to run many programs without porting.

We take inspiration from Plan9, Minix, Linux, and BSD. We are trying to generalize various concepts from other systems, to get one unified design. We will speak about this some more in the Design chapter.

At this time, Redox supports:

  • All x86-64 CPUs.
  • Graphics cards with VBE support (all Nvidia, Intel, and AMD cards from the past decade have this).
  • AHCI disks.
  • E1000 or RTL8168 network cards.
  • Intel HDA audio controllers.
  • Mouse and keyboard with PS/2 emulation.

This book is broken into 9 parts:

  • Overview: A quick'n'dirty overview of Redox.
  • Introduction: Explanation of what Redox is and how it compares to other systems.
  • Getting started: Compiling and running Redox.
  • The design: An in-depth introduction to the design and implementation of Redox.
  • Development in user space: Writing applications for Redox.
  • Contributing: How you can contribute to Redox.
  • Understanding the codebase: For familiarizing yourself with the codebase.
  • Fun: Top secret chapter.
  • The future: What Redox aims to be.

It is written such that you do not need any prior knowledge of Rust and/or OS development.

The Redox community

Redox has quite a few developers: more than 40 people work on it, and there are all sorts of cool people to work with.

The "core team" (people who are members of the GitLab organization) is currently:

(alphabetically sorted)

Some of those developers maintain several projects. If you are looking to contribute to a project, you can use this list to contact the developer responsible for maintaining it.

But don't. forget. all. the. other. awesome. contributors.

Side projects

Redox is a complete Rust operating system. In addition to the kernel, we are developing several side projects, including:

  • TFS: A file system inspired by ZFS.
  • Ion: The Redox shell.
  • Orbital: The display server of Redox.
  • OrbTK: A widget toolkit.
  • pkgutils: Redox's package management library and its command-line frontend.
  • Sodium: A Vi-like editor.
  • ralloc: A memory allocator.
  • libextra: Supplement for libstd, used throughout the Redox code base.
  • games-for-redox: A collection of mini-games for Redox (similar to bsd-games).
  • and a few other exciting projects you can explore here.

We also have three utility distributions, which are collections of small, useful command-line programs:

  • Coreutils: A minimal set of utilities essential for a usable system.
  • Extrautils: Extra utilities such as reminders, calendars, spellcheck, and so on.
  • Binutils: Utilities for working with binary files.

We also actively contribute to third party projects that are heavily used in Redox.

What tools are fitting for the Redox distribution?

Some of these tools will, in the future, be moved out of the default distribution into separate optional packages. Examples of these are Orbital, OrbTK, Sodium, and so on.

The listed tools fall into three categories:

  1. Critical, which are needed for a fully functioning and usable system.
  2. Ecosystem-friendly, which are there for establishing consistency within the ecosystem.
  3. Fun, which are "nice" to have and are inherently simple.

The first category should be obvious: an OS without certain core tools is a useless OS. The second category is there for convenience: namely, for making sure that the Redox infrastructure is consistent and integrated (e.g., pkgutils, OrbTK, and libextra). The third category contains the tools which are likely to become non-default in the future, but are nonetheless in the official distribution right now, for the charm.

It is important to note that we seek to avoid non-Rust tools, for safety and consistency (see Why Rust).

Maintainers

Currently, jackpot51 maintains:

ticki maintains:

mmstick maintains the Ion Shell.

Introduction

What is Redox?

You might still have the question: What is Redox actually?

Redox is an attempt to make a complete, fully-functioning, general-purpose operating system with a focus on safety, freedom, reliability, correctness, and pragmatism.

The goals of Redox

We want to be able to use it, without obstructions, as an alternative to Linux on our computers. It should be able to run most Linux programs with only minimal modifications (see Why Free Software).

We're aiming towards a complete, safe Rust ecosystem. This is a design choice, which hopefully improves correctness and security (see Why Rust).

We want to improve the security design when compared to other Unix-like kernels by using safe defaults and disallowing insecure configurations where possible.

The non-goals of Redox

We are not a Linux clone, or POSIX-compliant, nor are we crazy scientists who wish to redesign everything. Generally, we stick to well-tested and proven-correct designs. If it ain't broke, don't fix it.

This means that a large number of standard programs and libraries will be compatible with Redox. Some things that do not align with our design decisions will have to be ported.

The key here is the trade-off between correctness and compatibility. Ideally, you should be able to achieve both, but unfortunately, you can't always do so.

Why Redox?

A natural question this raises is: Why do we need yet another OS? There are plenty out there already.

The answer is: You don't. No-one needs an OS.

Why not contribute somewhere else? Linux? BSD? MINIX?

Linux

There are numerous other OSes and kernels that lack contributors and are in desperate need of more coders. Many times, this is for a good reason: failures in management, a lack of money, inspiration, or innovation, or a limited set of applications have all caused projects to dwindle and eventually fail.

All of these have numerous shortcomings, vulnerabilities, and bad design choices. Redox isn't and won't be perfect, but we seek to improve over other OSes.

Take Linux for example:

  • Legacy until infinity: Old syscalls stay around forever, and drivers for hardware that has been unavailable for years remain a mandatory part of the kernel. While they can be disabled, running them in kernel space is unnecessary, and they can be a source of system crashes, security issues, and unexpected bugs.
  • Huge codebase: To contribute, you must find a place to fit in among nearly 25 million lines of code, in the kernel alone. This is a consequence of Linux's monolithic architecture.
  • Non-permissive license: Linux is licensed under GPLv2, preventing the use of other free software licenses inside of the kernel. More on our use of the MIT X11-style license in Why Free Software.
  • Lack of memory safety: Linux has had numerous memory safety issues throughout its history. C is a fine language, but for such a security-critical system, it is difficult to use safely.

BSD

It is no secret that we're more in favor of BSD than Linux (although most of us are still Linux users, for various reasons). This is because of certain security features that allow the construction of a more reliable system, things like jails and ZFS.

BSD isn't quite there yet:

  • It still has a monolithic kernel. This means that a single buggy driver can crash, hang, or, in the worst case, cause damage to the system.
  • The use of C in the kernel makes it easy to introduce memory safety issues.

MINIX

And what about MINIX? Its microkernel design is a big influence on the Redox project, especially for reasons like reliability. MINIX is the system most in line with Redox's philosophy: it has a similar design and a similar license.

  • Use of C - again, we would like drivers and the kernel to be written in Rust, to improve readability and organization, and to catch more potential safety errors. Compared to monolithic kernels, Minix is actually a very well-written and manageable code base, but it is still prone to memory unsafety bugs, for example. These classes of bugs can unfortunately be quite fatal, due to their unexpected nature.
  • Lack of driver support - MINIX does not work well on real hardware, partly due to having less focus on real hardware.
  • Less focus on "Everything is a File" - MINIX focuses less on "Everything is a File" than various other operating systems, such as Plan 9, do. We are particularly focused on this idiom, as it makes for a more uniform program infrastructure.

The Need for Something New

We have to admit that we do like the idea of writing something that is our own (Not Invented Here syndrome). There are numerous places in the MINIX 3 source code where we would like to make changes, so many that perhaps a rewrite in Rust makes the most sense.

  • Different VFS model, based on URLs, where a program can control an entire segmented filesystem
  • Different driver model, where drivers interface with filesystems like network: and audio: to provide features
  • Different file system, RedoxFS, with a TFS implementation in progress
  • User space written mostly in Rust
  • Orbital, a new GUI

Why Free Software?

Redox OS will be packaged only with compatible free software, to ensure that the entire default distribution may be inspected, modified, and redistributed. Software that does not allow these features, i.e. proprietary software, is against the goals of security and freedom and will not be endorsed by Redox OS. We therefore comply with the GNU Free System Distribution Guidelines.

To view a list of compatible licenses, please refer to the GNU List of Licenses.

For more information about free software, please view this page.

Free Software is more Secure

Redox OS is predominantly MIT X11-style licensed, including all software, documentation, and fonts. There are only a few exceptions to this:

The MIT X11-style license has the following properties:

  • It gives you, the user of the software, complete and unrestrained access to the software, such that you may inspect, modify, and redistribute your changes
    • Inspection: Anyone may inspect the software for security vulnerabilities
    • Modification: Anyone may modify the software to fix security vulnerabilities
    • Redistribution: Anyone may redistribute the software to patch the security vulnerabilities
  • It is compatible with GPL licenses - Projects licensed as GPL can be distributed with Redox OS
  • It allows for the incorporation of GPL-incompatible free software, such as OpenZFS, which is CDDL licensed
  • The microkernel architecture means that driver maintainers could choose their own free software license to meet their needs

Proprietary Software is not Secure

Consider the following clause, from Microsoft Windows 10's EULA:

c.  Restrictions. The manufacturer or installer and Microsoft reserve all
    rights (such as rights under intellectual property laws) not expressly
    granted in this agreement. For example, this license does not give you
    any right to, and you may not:

(i)     use or virtualize features of the software separately;

(ii)    publish, copy (other than the permitted backup copy), rent, lease, or
        lend the software;

(iii)   transfer the software (except as permitted by this agreement);

(iv)    work around any technical restrictions or limitations in the software;

(v)     use the software as server software, for commercial hosting, make the
        software available for simultaneous use by multiple users over a
        network, install the software on a server and allow users to access it
        remotely, or install the software on a device for use only by remote
        users;

(vi)    reverse engineer, decompile, or disassemble the software, or attempt to
        do so, except and only to the extent that the foregoing restriction is
        permitted by applicable law or by licensing terms governing the use of
        open-source components that may be included with the software; and

(vii)   when using Internet-based features you may not use those features in any
        way that could interfere with anyone else’s use of them, or to try to
        gain access to or use any service, data, account, or network, in an
        unauthorized manner.

These clauses are typical of proprietary software licenses, but disallowed in free software licenses. They make it possible for Microsoft to sue and seek damages from individuals who attempt to inspect, modify, or redistribute the software that they have installed. Redox OS abhors such limitations on your freedom. As Redox OS focuses on security, keep in mind the following:

  • Inspection: Software that cannot be legally studied cannot have security flaws identified by the community. Crackers will take advantage of this, as they have no problem breaking the law, and will identify security flaws and exploit them for their own gains.
  • Modification: Software that cannot be legally changed cannot have security flaws fixed by the community. Again, this will lead to identified security flaws being left unfixed for long periods of time.
  • Distribution: Software that cannot be legally distributed cannot have security flaws patched by the community. This will lead to a number of vulnerable installations, even after an identified security flaw has been fixed.

Why Rust?

Why write an operating system in Rust? Why even write in Rust?

Rust has enormous advantages, because for operating systems safety matters. A lot, actually.

Since operating systems are such an integral part of computing, they are a very security-critical component.

There have been numerous bugs and vulnerabilities in Linux, BSD, glibc, Bash, X, and so on over the years, simply due to the lack of memory and type safety. Rust gets this right by enforcing memory safety statically.
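As a hedged illustration (not code from Redox itself), here is how two classic memory-safety bug classes look in Rust: an out-of-bounds access becomes a checked, well-defined failure, and a use-after-free simply does not compile:

fn main() {
    // 1. Out-of-bounds access: instead of silently corrupting memory,
    //    an out-of-range index is caught.
    let buf = [1u8, 2, 3, 4];
    let index = 10;
    match buf.get(index) {
        Some(byte) => println!("byte = {}", byte),
        None => println!("index {} is out of bounds, caught safely", index),
    }

    // 2. Use-after-free: the borrow checker rejects this at compile time.
    //    Uncommenting the lines below makes the program fail to compile.
    // let dangling;
    // {
    //     let local = String::from("dropped at end of scope");
    //     dangling = &local;
    // }
    // println!("{}", dangling);
}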

Design does matter, but so does implementation. Rust attempts to avoid these unexpected memory-unsafe conditions (which are a major source of security-critical bugs). Design is a very transparent source of issues: you know what is going on, you know what was intended and what was not.

The basic design of the kernel/user space separation is fairly similar to genuine Unix-like systems, at this point. The idea is roughly the same: you separate kernel and user space, through strict enforcement by the kernel, which manages memory and other critical resources.

However, we have an advantage: enforced memory and type safety. This is Rust's strong side; a large number of "unexpected bugs" (for example, undefined behavior) are eliminated.

The design of Linux and BSD is secure. The implementation is not.

Look through the published vulnerabilities for these systems, and you'll probably notice that many are bugs originating in unsafe conditions (which Rust effectively eliminates), like buffer overflows, rather than in the overall design.

We hope that using Rust will produce a more secure operating system in the end.

TODO Rust doesn't make your code's design correct; that's impossible. However, it is possible to formally prove a design to be sound (like seL4 did), and this is something we're working on.

Unsafes

unsafe is a way to tell Rust "I know what I'm doing!", which is often necessary when writing low-level code that provides safe abstractions. You cannot write a kernel without unsafes.

In that light, a kernel cannot be 100% safe; however, the unsafe parts have to be marked with unsafe, which keeps them segregated from the safe code. We seek to eliminate unsafes where we can, and when we do use them, we are extremely careful.
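As a minimal sketch (not actual Redox kernel code) of what such segregation looks like, an unsafe operation can be confined to one small, documented block behind a safe method:

use std::ptr;

// Hypothetical register wrapper: the unsafe volatile read is kept in one
// auditable place, while callers only see a safe method.
struct Register {
    addr: *mut u32,
}

impl Register {
    fn read(&self) -> u32 {
        // SAFETY: whoever constructs `Register` guarantees that `addr`
        // points to valid, mapped memory for the struct's lifetime.
        unsafe { ptr::read_volatile(self.addr) }
    }
}

fn main() {
    // Use a local variable as a stand-in for a hardware register.
    let mut fake_register: u32 = 0xdead_beef;
    let reg = Register { addr: &mut fake_register };
    println!("register value: {:#x}", reg.read());
}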

A quick grep gives us some stats: the kernel has about 70 invocations of unsafe in about 4500 lines of code overall.

This contrasts with kernels written in C, which cannot make guarantees about safety without costly formal analysis.

You can find out more about how unsafe works in the relevant section of the Rust book.

How Redox compares to other operating systems

We share quite a lot with quite a lot of other operating systems.

Syscalls

The syscall interface is very Unix-y. For example, we have open, pipe, pipe2, lseek, read, write, brk, execv, and so on. Currently, we support the 31 most common Linux syscalls.

Compared to Linux, our syscall interface is much more minimal. This is not because of the stage in development, but because of a minimalist design.
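As a rough sketch of what this looks like from userspace, assuming the redox_syscall crate's thin wrappers (syscall::open, syscall::read, syscall::close) and its O_RDONLY flag:

extern crate syscall; // the redox_syscall crate

use syscall::flag::O_RDONLY;

fn main() {
    // Open a file the Unix way, read a chunk, and close the descriptor.
    let fd = syscall::open("file:/etc/motd", O_RDONLY).expect("open failed");

    let mut buf = [0u8; 128];
    let count = syscall::read(fd, &mut buf).expect("read failed");
    println!("{}", String::from_utf8_lossy(&buf[..count]));

    let _ = syscall::close(fd);
}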

"Everything is a URL"

This is a generalization of "Everything is a file", largely inspired by Plan 9. In Redox, "resources" (which will be explained later) can be both socket-like and file-like, making them fast enough to be used for virtually everything.

This way we get a more unified system API. We will explain this later, in URLs, schemes, and resources.

The kernel

Redox's kernel is a microkernel. The architecture is largely inspired by MINIX.

In contrast to Linux or BSD, Redox has only 16,000 lines of kernel code, a number that is often decreasing. Most services are provided in user space.

Having a vastly smaller amount of code in the kernel makes it easier to find and fix bugs and security issues. Andrew Tanenbaum (author of MINIX) stated that for every 1,000 lines of properly written code, there is a bug. This means that a monolithic kernel averaging over 15,000,000 lines of code could contain at least 15,000 bugs, while a microkernel averaging around 15,000 lines of code would contain about 15.

It should be noted that the extra code has simply moved out of kernel space, making it less dangerous; the total amount of code is not necessarily smaller.

The main idea is to have components and drivers that would be inside a monolithic kernel exist in user space and follow the Principle of Least Authority (POLA). This is where every individual component is:

  • Completely isolated in memory and as separate user processes
    • The failure of one component does not crash other components
    • Allows foreign and untrusted code to not expose the entire system
    • Bugs and malware cannot spread to other components
  • Has restricted communication with other components
  • Doesn't have Admin/Super-User privileges
    • Bugs are moved to user space which reduces their power

All of this increases the reliability of the system significantly. This would be useful for mission-critical applications and for users that want minimal issues with their computer systems.

Will Redox replace Linux?

No.

About this Book

This book is written in Markdown, built using mdBook. The source files can be found (and forked) on GitLab at gitlab.redox-os.org/redox-os/book/.

Getting started

Preparing the build has information about setting up your system to compile Redox, which is necessary if you want to contribute to Redox development.

If you aren't (currently) interested in going through the trouble of building Redox, you can download the latest release. See the instructions for running in a virtual machine or running on real hardware.

Trying Redox in a virtual machine

The ISO image is not the preferred way to run Redox in a virtual machine. Currently, the ISO image loads the entire hard disk image (including unused space) into memory. In the future, the live disk should be improved so that this doesn't happen.

Instead, you want to use the hard disk image, which you can find on the release pages as a .bin.gz file. Download and extract that file.

You can then run it in your preferred emulator; this command will run qemu with various features Redox can use enabled:

qemu-system-x86_64 -serial mon:stdio -d cpu_reset -d guest_errors -smp 4 -m 1024 -s -machine q35 -device ich9-intel-hda -device hda-duplex -net nic,model=e1000 -net user -device nec-usb-xhci,id=xhci -device usb-tablet,bus=xhci.0 -enable-kvm -cpu host -drive file=redox_VERSION.bin,format=raw

Change redox_VERSION.bin to the .bin file you just downloaded.

Once the system is fully booted, you will be greeted by the RedoxOS login screen. In order to login, enter the following information:

User = root
password = password

Running Redox on real hardware

Currently, Redox only natively supports booting from a hard disk with no partition table. Therefore, the current ISO image uses a bootloader that loads the filesystem into memory and emulates such a disk. This is inefficient and requires a somewhat large amount of memory, which will be fixed once proper support for various things (such as a USB mass storage driver) is implemented.

Despite the awkward way it works, the ISO image is the recommended way to try out Redox on real hardware (in an emulator, a virtual hard drive is better). You can obtain an ISO image either by downloading the latest release or by building one with make iso from the Redox source tree.

You can create a bootable CD or USB drive from the ISO as with other bootable disk images.

Hardware support is limited at the moment, so your mileage may vary. There is no USB HID driver, so a USB keyboard or mouse will not work. There is a PS/2 driver, which works with the keyboards and touchpads in many laptops. For networking, the rtl8168d and e1000d ethernet controllers are currently supported.

Redox isn't currently going to replace your existing OS, but it's a fun thing to try; boot Redox on your computer, and see what works.

Installing the toolchain

The Redox toolchain is required in order to compile certain parts of Redox. This basically entails installing a patched version of gcc.

Ubuntu and other Debian based systems

To install the toolchain, run the following commands:

# Get the Redox OS APT key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys AA12E97F0881517F

# Install the APT repository
sudo add-apt-repository 'deb https://static.redox-os.org/toolchain/apt /'

# Update your package lists
sudo apt update

# Install the cross compiler
sudo apt install x86-64-unknown-redox-gcc

Arch Linux

To install the toolchain, run the following commands:

# Clone libc
git clone --recursive https://gitlab.redox-os.org/redox-os/libc.git

# Go to the packages
cd libc/packages/arch

# Start with binutils
cd binutils
makepkg -si

# Then autoconf
cd ../autoconf
makepkg -si

# Then gcc-freestanding
cd ../gcc-freestanding
makepkg -si

# Then newlib
cd ../newlib
makepkg -si

# Finally gcc
cd ../gcc
makepkg -si

Other distros/Mac OS X

To install the toolchain, run the following commands:

# Clone libc
git clone --recursive git@gitlab.redox-os.org:redox-os/libc

# Run the setup script
cd libc
./setup.sh all

# Add the tools to your path
export PATH=$PATH:/path/to/libc/build/prefix/bin

Next steps

Now that we have the build tools and the toolchain, we can prepare our build.

Preparing the build

Woah! You made it this far, all the way to here. Congrats! Now we've got to build Redox.

Using the bootstrap Script

If you're on a Linux or macOS computer, you can just run the bootstrapping script, which does the build preparation for you. Change to the folder where you want the source code to live and run the following command:

$ curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/bootstrap.sh -o bootstrap.sh && bash -e bootstrap.sh

This script fetches build dependencies using a package manager for your platform and clones the Redox code from GitLab. It checks whether you might already have a dependency and skips the installation in this case. On some systems this is simply done by checking whether the binary exists and doesn't take into account which version of the program you have. This can lead to build errors if you have old versions already installed. In this case, please install the skipped dependencies manually.

Manual Setup

Cloning the repository

Change to the folder where you want your copy of Redox to be stored and issue the following command:

$ git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive && \
   cd redox && git submodule update --recursive --init

Give it a while. Redox is big.

Installing the build dependencies

We assume you have a package manager and know how to use it (if not, you will have to install the build dependencies even more manually). We need the following dependencies: make (probably already installed), nasm (the assembler we use in the build process), and qemu (the hardware emulator we will use; if you want to run Redox on real hardware, you should read the Fun chapter instead).

Linux Users:

$ [your package manager] install cmake make nasm qemu pkg-config libfuse-dev wget gperf libhtml-parser-perl

MacOS Users using MacPorts:

$ sudo port install make nasm qemu gcc49 pkg-config osxfuse x86_64-elf-gcc

MacOS Users using Homebrew:

$ brew install make nasm qemu gcc49 pkg-config Caskroom/cask/osxfuse
$ brew install redox-os/gcc_cross_compilers/x86_64-elf-gcc

Setting Up Nightly Rust

The following step is not required if you already have a functioning Rust nightly installation; note that nightly is required.

We will use rustup to manage our Rust versions:

$ curl https://sh.rustup.rs -sSf | sh

You may need to run rustup to install the recommended nightly version.

There is one more tool we need from Rust in order to install Redox: Xargo. Xargo allows us to have a custom libstd.

$ cargo install xargo

Once it is installed, add its install directory to your path by running the following:

$ export PATH=${PATH}:~/.cargo/bin

This line can be added to your shell start-up file, like .bashrc, so that it is automatically set up for you in the future.

Next steps

Once this is all set up, we can finally compile Redox.

Compiling Redox

Now we have prepared the build, so naturally we're going to build Redox.

$ make all

Give it a while. Redox is big.

Running Redox

To run Redox, do:

$ make qemu

This should open up a Qemu window, booting to Redox.

If it does not work, try:

$ make qemu kvm=no # we disable KVM

or:

$ make qemu iommu=no

If this doesn't work either, you should go open an issue.

Note

If you encounter any bugs, errors, obstructions, or other annoying things, please report the issue to the Redox repository. Thanks!

Trying Out Redox

Use the F2 key to get to a login shell. The user user can log in without a password. For root, the password is password for now. help lists the builtin commands for your shell (ion). ls /bin will show a list of applications you can execute.

Use the F3 key to switch to the graphical user interface (orbital). Log in with the same username/password combinations as above.

Use the F1 key to get back to kernel output.

Sodium

Sodium is Redox's Vi-like editor. To try it out:

  1. Open the terminal by clicking the icon in the button bar
  2. Type sudo pkg install sodium to install Sodium. You will need a network connection for this.
  3. Type sodium. This should now open up a separate editor window.

A short list of the Sodium defaults:

  • hjkl: Navigation keys.
  • ia: Go to insert/append mode.
  • ;: Go to command-line mode.
  • shift-space: Go to normal mode.

For a more extensive list, write ;help.

Setting a reminder/countdown

To demonstrate the ANSI support, we will play around with fancy reminders.

Open up the terminal emulator. Now, type rem -s 10 -b. This will set a 10-second countdown with a progress bar.

Playing around with Rusthello

Rusthello is an advanced Reversi AI made by HenryTheCat. It is highly concurrent, which demonstrates Redox's multithreading capabilities. It supports various AIs, such as brute force, minimax, local optimizations, and hybrid AIs.

Oh, let's try it out!

# install rusthello by typing command
$ sudo pkg install games
# start it with command
$ rusthello

You will then be prompted for various things, such as difficulty, AI setup, and so on. When this is done, Rusthello interactively starts the battle between you and an AI, or between an AI and an AI.

Exploring OrbTK

Click the OrbTK demo app in the menu bar. This will open a graphical user interface that demonstrates the different widgets OrbTK currently supports.

Asking questions, giving feedback, or anything else

The quickest and most open way to communicate with the Redox team is on our chat server. Currently, you can only get an invite by sending an email request to info@redox-os.org, which might take a little while, since it's not automated. Simply say you'd like to join the chat. We're working on a better way to do this, but this is the best way right now.

Alternatively you can use our Forum.

Explore

This chapter will be dedicated to exploring every aspect of a running Redox system, in gruesome detail.

We will start with the boot system, continuing to the shell and command-line utilities, moving on to the GUI, all while explaining where things happen, and how to change them.

Redox is meant to be an insanely customizable system, allowing a user to tear it down to a very small command-line distro, or build it up to a full desktop environment with ease.

Boot Process

Bootloader

The first code to be executed is the boot sector in bootloader/${ARCH}/bootsector.asm. This loads the bootloader from the first partition. In Redox, the bootloader finds the kernel and loads it in full at address 0x100000. It also initializes the memory map and the VESA display mode, as these rely on BIOS functions that cannot be accessed easily once control is switched to the kernel.

Kernel

The kernel is entered through the interrupt table, with interrupt 0xFF. This interrupt is only available in the bootloader. By utilizing this method, all kernel entry can be contained to a single function, the kernel function, found in kernel/main.rs, that serves as the entry point in the kernel.bin executable file.

At this stage, the kernel copies the memory map out of low memory, sets up an initial page mapping, allocates the environment object, defined in kernel/env/mod.rs, and begins initializing the drivers and schemes that are embedded in the kernel. This process will print out kernel information such as the following:

Redox 32 bits
  * text=101000:151000 rodata=151000:1A4000
  * data=1A4000:1A5000 bss=1A5000:1A6000
 + PS/2
   + Keyboard
     - Reset FA, AA
     - Set defaults FA
     - Enable streaming FA
   + PS/2 Mouse
     - Reset FA, AA
     - Set defaults FA
     - Enable streaming FA
 + IDE on 0, 0, 0, 0, C120, IRQ: 0
   + Primary on: C120, 1F0, 3F4, IRQ E
     + Master: Status: 58 Serial: QM00001 Firmware: 2.0.0 Model: QEMUHARDDISK 48-bit LBA Size: 128 MB
     + Slave: Status: 0
   + Secondary on: C128, 170, 374, IRQ F
     + Master: Status: 41 Error: 2
     + Slave: Status: 0

After initializing the in-kernel structures, drivers, and schemes, the first userspace process spawned by the kernel is the init process, more specifically the initfs:/bin/init process.

Init

Redox has a multi-staged init process, designed to allow for the loading of disk drivers in a modular and configurable fashion. This is commonly referred to as an init ramdisk.

Ramdisk Init

The ramdisk init has the job of loading the drivers required to access the root filesystem and then transferring control to the userspace init. The ramdisk itself is a filesystem that is linked with the kernel and loaded by the bootloader as part of the kernel image. You can see the code associated with the init process in crates/init/main.rs.

The ramdisk init loads, by default, the file /etc/init.rc, which may be found in initfs/etc/init.rc. This file currently has the contents:

echo ############################
echo ##  Redox OS is booting   ##
echo ############################
echo

# Load the filesystem driver
initfs:/bin/redoxfsd disk:/0

# Start the filesystem init
cd file:/
init

As such, it is very easy to modify Redox to load a different filesystem as the root, or to move processes and drivers into or out of the ramdisk.

Filesystem Init

As seen above, the ramdisk init has the job of loading and starting the filesystem init. By default, this will mean that a new init process will be spawned that loads a new configuration file, now in the root filesystem at filesystem/etc/init.rc. This file currently has the contents:

echo ############################
echo ##  Redox OS has booted   ##
echo ##  Press enter to login  ##
echo ############################
echo

# Login process, handles debug console
login

Modifying this file allows for booting directly to the GUI. For example, we could replace login with orbital.
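A sketch of such a modified filesystem/etc/init.rc, based on the file shown above, could look like this:

echo ############################
echo ##  Redox OS has booted   ##
echo ############################
echo

# Start the GUI directly instead of the debug-console login
orbital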

Login

After the init processes have set up drivers and daemons, it is possible for the user to log in to the system. A simple login program is currently used; its source may be found in crates/login/main.rs.

The login program accepts a username (currently, any username may be used), prints the /etc/motd file, and then executes sh. The motd file can be configured to print any message; it is at filesystem/etc/motd and currently has the contents:

############################
##  Welcome to Redox OS   ##
##  For GUI: Run orbital  ##
############################

At this point, the user will be able to access the Shell.

Graphical overview

Here is an overview of the initialization process with scheme creation and usage. For simplicity's sake, we do not depict all scheme interaction but at least the major ones.

Redox initialization graph

Shell

The shell used in Redox is ion.

When ion is called without "-c", it starts a main loop, which can be found inside Shell.execute().

        self.print_prompt();
        while let Some(command) = readln() {
            let command = command.trim();
            if !command.is_empty() {
                self.on_command(command, &commands);
            }
            self.update_variables();
            self.print_prompt();
        }

self.print_prompt(); is used to print the shell prompt.

The readln() function is the input reader. The code can be found in crates/ion/src/input_editor.

The documentation about trim() can be found here. If the command is not empty, the on_command method will be called. Then, the shell will update variables, and reprint the prompt.

fn on_command(&mut self, command_string: &str, commands: &HashMap<&str, Command>) {
    self.history.add(command_string.to_string(), &self.variables);

    let mut pipelines = parse(command_string);

    // Execute commands
    for pipeline in pipelines.drain(..) {
        if self.flow_control.collecting_block {
            // TODO move this logic into "end" command
            if pipeline.jobs[0].command == "end" {
                self.flow_control.collecting_block = false;
                let block_jobs: Vec<Pipeline> = self.flow_control
                                               .current_block
                                               .pipelines
                                               .drain(..)
                                               .collect();
                match self.flow_control.current_statement.clone() {
                    Statement::For(ref var, ref vals) => {
                        let variable = var.clone();
                        let values = vals.clone();
                        for value in values {
                            self.variables.set_var(&variable, &value);
                            for pipeline in &block_jobs {
                                self.run_pipeline(&pipeline, commands);
                            }
                        }
                    },
                    Statement::Function(ref name, ref args) => {
                        self.functions.insert(name.clone(), Function { name: name.clone(), pipelines: block_jobs.clone(), args: args.clone() });
                    },
                    _ => {}
                }
                self.flow_control.current_statement = Statement::Default;
            } else {
                self.flow_control.current_block.pipelines.push(pipeline);
            }
        } else {
            if self.flow_control.skipping() && !is_flow_control_command(&pipeline.jobs[0].command) {
                continue;
            }
            self.run_pipeline(&pipeline, commands);
        }
    }
}

First, on_command adds the commands to the shell history with self.history.add(command_string.to_string(), &self.variables);.

Then the script will be parsed. The parser code is in crates/ion/src/peg.rs. The parse will return a set of pipelines, with each pipeline containing a set of jobs. Each job represents a single command with its arguments. You can take a look in crates/ion/src/peg.rs.

pub struct Pipeline {
    pub jobs: Vec<Job>,
    pub stdout: Option<Redirection>,
    pub stdin: Option<Redirection>,
}
pub struct Job {
    pub command: String,
    pub args: Vec<String>,
    pub background: bool,
}

What Happens Next:

  • If the current block is a collecting block (a for loop or a function declaration) and the current command is ended, we close the block:
    • If the block is a for loop we run the loop.
    • If the block is a function declaration we push the function to the functions list.
  • If the current block is a collecting block but the current command is not ended, we add the current command to the block.
  • If the current block is not a collecting block, we simply execute the current command.

The code blocks are defined in crates/ion/src/flow_control.rs.

pub struct CodeBlock {
    pub pipelines: Vec<Pipeline>,
}

The function code can be found in crates/ion/src/functions.rs.

The pipeline content is executed by run_pipeline().

The Command struct inside crates/ion/src/main.rs maps each command to a description and a method to be executed. For example:

commands.insert("cd",
                Command {
                    name: "cd",
                    help: "Change the current directory\n    cd <path>",
                    main: box |args: &[String], shell: &mut Shell| -> i32 {
                        shell.directory_stack.cd(args, &shell.variables)
                    },
                });

cd is described by "Change the current directory\n cd <path>", and when called the method shell.directory_stack.cd(args, &shell.variables) will be used. You can see its code in crates/ion/src/directory_stack.rs.

GUI

The desktop environment in Redox, referred to as Orbital, is provided by a set of programs that run in userspace:

Programs

The following are command-line utilities that provide GUI services

orbital

The orbital display and window manager sets up the orbital: scheme, manages the display, and handles requests for window creation, redraws, and event polling.

launcher

The launcher is a multi-purpose program that scans the applications in the /apps/ directory and provides the following services:

Called Without Arguments

A taskbar that displays icons for each application

Called With Arguments

An application chooser that opens a file in a matching program

  • If one application is found that matches, it will be opened automatically
  • If more than one application is found, a chooser will be shown

Applications

The following are GUI utilities that can be found in the /apps/ directory.

Calculator

A calculator that provides similar functionality to the calc program

Editor

A simple editor that is similar to notepad

File Browser

A file browser that displays icons, names, sizes, and details for files. It uses the launcher command to open files when they are clicked

Image Viewer

A simple image viewer

Pixelcannon

A 3D renderer that can be used for benchmarking the Orbital desktop.

Sodium

A vi-like editor that provides syntax highlighting

Terminal Emulator

An ANSI terminal emulator that launches sh by default.

The design of Redox

This chapter will go over the design of Redox: the kernel, the user space, the ecosystem, the trade-offs and much more.

Components of Redox

Redox is made up of several discrete components.

  • ion - shell
  • TFS/RedoxFS - filesystem
  • kernel
  • drivers
  • orbital - DE/WM/Display Server

Orbital subcomponents

  • orbterm - terminal
  • orbdata - images, fonts, etc.
  • orbaudio - audio
  • orbutils - bunch of applications
  • orblogin - login prompt
  • orbtk - like gtk but orb
  • orbfont - font rendering library
  • orbclient - display client
  • orbimage - image rendering library

Core Applications

  • Sodium - text editor
  • orbutils
    • background
    • browser
    • calculator
    • character map
    • editor
    • file manager
    • launcher
    • viewer

URLs, Schemes, and Resources

This is one of the most important design choices Redox makes. These three essential concepts are very entangled.

What does "Everything is a URL" mean?

"Everything is a URL" is a generalization of "Everything is a file", allowing broader use of this unified interface for schemes.

These can be used for effectively modulating the system in a "non-patchworky" manner.

The term is rather misleading, since a URL is just the identifier of a scheme and a resource descriptor. So in that sense, "Everything is a scheme, identified by a URL" is more accurate, but not very catchy.

So, how does it differ from files?

You can think of URLs as segregated virtual file systems, which can be arbitrarily structured (they do not have to be tree-like) and arbitrarily defined by a program. Furthermore, "files" don't have to behave file-like! More on this later.

It opens up a lot of possibilities.

[... TODO]

The idea of virtual file systems is not a new one. If you are on a Linux computer, you should try to cd to /proc, and see what's going on there.

Redox extends this concept to a much more powerful one.

TODO

URLs

The URL itself is a relatively uninteresting (yet very important) notion for the design of Redox. The interesting part is what it represents.

The URL

In short, a URL is an identifier of a resource. It contains two parts:

  1. The scheme part. This represents the "receiver", i.e. what scheme will handle the (F)OPEN call. This can be any arbitrary UTF-8 string, and will often simply be the name of your protocol.

  2. The reference part. This represents the "payload" of the URL, namely what the URL refers to. Consider file: as an example. A URL starting with file: simply has a reference which is a path to a file. The reference can be any arbitrary byte string; its parsing, interpretation, and storage are left to the scheme. For this reason, it is not required to be a tree-like structure.

So, the string representation of a URL looks like:

[scheme]:[reference]

For example:

file:/path/to/myfile

Note that // is not required, for convenience.
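As a small illustration (a hypothetical helper, not the kernel's actual parser), splitting a URL string into its two parts only requires finding the first colon:

fn main() {
    let url = "file:/path/to/myfile";

    // Split on the first ':' only; everything after it belongs to the reference.
    let mut parts = url.splitn(2, ':');
    let scheme = parts.next().unwrap_or("");
    let reference = parts.next().unwrap_or("");

    println!("scheme = {:?}, reference = {:?}", scheme, reference);
}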

Opening a URL

URLs can be opened, yielding schemes, which can in turn be opened to yield resources, which can be read, written, and (for some resources) seeked (there are more operations, which are described later on).

For compatibility reasons, we use a file API similar to the Rust standard library's for opening URLs:

use std::fs::OpenOptions;
use std::io::prelude::*;


fn main() {
    // Let's read from a TCP stream
    let tcp = OpenOptions::new()
                .read(true) // readable
                .write(true) // writable
                .open("tcp:0.0.0.0");
}

TODO: Maybe do something with the tcp stream. Ping-pong?

TODO: The terminology may be somewhat confusing for the reader.
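One hedged sketch of the ping-pong idea from the TODO above; the tcp: reference format used here ("tcp:ip:port") and the presence of an echo peer are assumptions, not documented behavior:

use std::fs::OpenOptions;
use std::io::prelude::*;

fn main() -> std::io::Result<()> {
    // Open a TCP resource read/write, as in the previous example.
    let mut tcp = OpenOptions::new()
        .read(true)
        .write(true)
        .open("tcp:127.0.0.1:7")?; // assumed echo service on port 7

    // Ping...
    tcp.write_all(b"ping")?;

    // ...pong: read the echoed reply back.
    let mut buf = [0u8; 4];
    tcp.read_exact(&mut buf)?;
    println!("reply: {}", String::from_utf8_lossy(&buf));

    Ok(())
}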

How do URLs work under the hood?

Representation

Since it is impossible to go from user space to ring 0 in a typed manner, we have to use some weakly typed representation (that is, we can't use an enum, unless we want to do transmutations and friends). Therefore, we use a string-like representation when moving to kernel space. This is basically just a raw pointer to a C-like, null-terminated string. To avoid further overhead, we use more efficient representations:

Url<'a>

The first of the three representations is the simplest one. It consists of a struct containing two fat pointers, representing the scheme and the reference respectively.

OwnedUrl

This is a struct containing two Strings (that is, growable, heap-allocated UTF-8 strings), representing the scheme and the reference respectively.

CowUrl<'a>

This is a Copy-on-Write (CoW) URL, which, when mutated, gets cloned to heap. This way, you get efficient conditional allocation of the URL.

Not much fanciness here.

Opening a URL

Opening URLs happens through the OPEN system call. OPEN takes a C-like, null-terminated string, and two pointer-sized integers, keeping the open flags and the file mode, respectively.

The path argument of OPEN does not have to be a URL. For compatibility reasons, it will default to the file: scheme. If otherwise specified, the scheme will be resolved by the registrar (see The root scheme) and then opened.

TODO

Schemes

Schemes are the natural counterpart to URLs. URLs are opened to schemes, which can then be opened to yield resources.

Schemes are named such that the kernel is able to uniquely identify them. This name is used in the scheme part of the URL.

Schemes are a generalization of file systems. It should be noted that schemes do not necessarily represent normal files; they are often a "virtual file" (i.e., an abstract unit with certain operations defined on it).

Throughout the whole ecosystem of Redox, schemes are used as the main communication primitive because they are a powerful abstraction. With schemes Redox can have one unified I/O interface.

Schemes can be defined both in user space and in kernel space, but when possible, user space is preferred.

Kernel Schemes

The kernel provides a small number of schemes in order to support userspace.

  • : - The root scheme; allows the creation of userspace schemes
  • debug: - Provides access to the serial console
  • event: - Allows reading of Events which are registered using fevent
  • env: - Access and modify environment variables
  • initfs: - Read-only filesystem used for initializing the system
  • irq: - Allows userspace handling of IRQs
  • pipe: - Used internally by the kernel to implement pipe
  • sys: - System information, such as the context list and scheme list

Userspace Schemes

The Redox userspace, starting with initfs:bin/init, will create schemes during initialization. Once the user is able to log in, the following should be established:

  • disk: (ahcid) - Raw access to disks
  • display: (vesad) - Screen multiplexing of the display; provides text and graphical screens, used by orbital:
  • ethernet: (ethernetd) - Raw ethernet frame send/receive, used by ip:
  • file: (redoxfs) - Root filesystem
  • ip: (ipd) - Raw IP packet send/receive
  • network: (e1000d, rtl8168d) - Link-level network send/receive, used by ethernet:
  • null: (nulld) - Scheme that discards all writes and reads no bytes
  • orbital: (orbital) - Windowing system
  • pty: (ptyd) - Pseudoterminals, used by terminal emulators
  • rand: (randd) - Pseudo-random number generator
  • tcp: (tcpd) - TCP sockets
  • udp: (udpd) - UDP sockets
  • zero: (zerod) - Scheme that discards all writes and always fills read buffers with zeroes
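As a quick, hedged example of talking to one of these schemes from an ordinary program (assuming the standard library's file API routes through the scheme system as described above):

use std::fs::File;
use std::io::Read;

fn main() {
    // rand: (provided by randd, per the list above) hands out
    // pseudo-random bytes through ordinary reads.
    let mut rng = File::open("rand:").expect("failed to open rand:");

    let mut buf = [0u8; 8];
    rng.read_exact(&mut buf).expect("failed to read from rand:");

    println!("random bytes: {:?}", buf);
}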

Scheme operations

What makes a scheme a scheme? Scheme operations!

A scheme is just a data structure with certain functions defined on it:

  1. open - open the scheme. open is used for initially starting communication with a scheme; it is an optional method, and will default to returning ENOENT.

  2. mkdir - make a new sub-structure. Note that the name is a little misleading (and it might even be renamed in the future), since in many schemes mkdir won't make a directory, but instead perform some form of substructure creation.

Optional methods include:

  1. unlink - remove a link (that is a binding from one substructure to another).

  2. link - add a link.

The Root Scheme

The root scheme is the kernel scheme which is used for registering and retrieving information about schemes. The root scheme's name is simply an empty string ("").

Registering a Scheme

Registering a scheme is done by opening the name of the scheme with the CREATE flag, in the root scheme.
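A minimal sketch of this using the standard library (the scheme name myscheme is hypothetical; the VecScheme example later in this chapter does the same thing with :vec):

use std::fs::File;

fn main() {
    // Creating ":myscheme" in the root scheme registers a new userspace
    // scheme named "myscheme:". The returned handle is then used to
    // receive and answer requests (see the VecScheme example later on).
    let _socket = File::create(":myscheme").expect("failed to register scheme");
}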

TODO

Resources

Resources are opened schemes. You can think of them like an established connection between the scheme provider and the client.

Resources are closely connected to schemes and are sometimes intertwined with schemes. The difference between schemes and resources is subtle but important.

Resource operations

A resource can be defined as a data type with the following methods defined on it (a conceptual sketch follows the list):

  1. read - read N bytes into a buffer provided as an argument. Defaults to EBADF.
  2. write - write a buffer to the resource. Defaults to EBADF.
  3. seek - seek the resource. That is, move the "cursor" without writing. Many resources do not support this operation. Defaults to EBADF.
  4. close - close the resource, potentially releasing a lock. Defaults to EBADF.
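To make the defaults concrete, here is a purely conceptual sketch; Redox has no trait literally named Resource (in the real scheme API these operations are methods that take a resource id), so treat this as an illustration of the four operations and their EBADF defaults, not as the actual API:

use std::io;

#[allow(dead_code)]
trait Resource {
    fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
        Err(io::Error::from_raw_os_error(9)) // default: EBADF
    }
    fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
        Err(io::Error::from_raw_os_error(9)) // default: EBADF
    }
    fn seek(&mut self, _pos: usize) -> io::Result<usize> {
        Err(io::Error::from_raw_os_error(9)) // default: EBADF
    }
    fn close(&mut self) -> io::Result<()> {
        Err(io::Error::from_raw_os_error(9)) // default: EBADF
    }
}

fn main() {}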

TODO add F-operations.

The resource type

There are two types of resources:

  1. File-like resources. These behave a lot like files. They act in a blocking manner; reads and writes are "buffer-like".
  2. Socket-like resources. These behave like sockets. They act in a non-blocking manner; reads and writes are more "stream-like".

TODO Expand on this.

Stitching it All Together

The "URL, scheme, resource" model is simply a unified interface for efficient inter-process communication. URLs are simply resource descriptors. Schemes are simply resource "entries", which can be opened. You can think of a scheme as a closed book. It cannot itself be read or written, but you can open it to an open book: a resource. Resources are simply primitives for communications. They can behave either socket-like (as a stream of bytes, e.g. TCP and Orbital) or file-like (as an on-demand byte buffer, e.g. file systems and stdin).

A quick, ugly diagram would look like this:

             /
             |                                                          +=========+
             |                                                          | Program |
             |                                                          +=========+
             |               +--------------------------------------+      ^   | write
             |               |                                      |      |   |
  User space <  +----- URL -----+                                   | read |   v
             |  | +-----------+ |       open    +---------+  open   |   +----------+
             |  | |  Scheme   |-|---+  +------->| Scheme  |------------>| Resource |
             |  | +-----------+ |   |  |        +---------+             +----------+
             |  | +-----------+ |   |  |
             |  | | Reference | |   |  |
             |  | +-----------+ |   |  |
             \  +---------------+   |  |
                            resolve |  |
             /                      v  |
             |                 +=========+
Kernel space <                 | Resolve |
             |                 +=========+
             \

TODO

"Everything is an URL"

"Everything is an URL" is an important principle in the design of Redox. Roughly speaking it means that the API, design, and ecosystem is centered around URLs, schemes, and resources as the main communication primitive. Applications communicate with each other, the system, daemons, etc, using URLs. As such, programs do not have to create their own constructs for communication.

By unifying the API in this way, you are able to have a consistent, clean, and flexible interface.

We can't really claim credit for this concept (beyond our exact design and implementation). The idea is not a new one: it is very similar to 9P from Plan 9 by Bell Labs, and a similar approach has been taken in Unix and its successors.

How it differs from "Everything is a file"

With "Everything is a file" all sorts of devices, processes, and kernel parameters can be accessed as files in a regular filesystem. This leads to absurd situations like the hard disk containing the root filesystem / contains a folder named dev with device files including sda which contains the root filesystem. Situations like this are missing any logic. Furthermore many file properties don't make sense on these 'special files': What is the size of /dev/null or a configuration option in sysfs?

In contrast to "Everything is a file", Redox does not enforce a common tree node for all kinds of resources. Instead resources are distinguished by protocol. This way USB devices don't end up in a "filesystem", but a protocol-based scheme like EHCI. Real files are accessible through a scheme called file, which is widely used and specified in RFC 1630 and RFC 1738.

An example.

Enough theory! Time for an example.

We will implement a scheme which holds a vector, pushing elements when you write and popping them when you read. Let's call it vec:.

Let's get going:

The code

So, first of all, we need to import the things we need:


extern crate syscall; // add redox_syscall = "*" to your Cargo.toml dependencies

use syscall::scheme::SchemeMut;
use syscall::error::{Error, Result, ENOENT, EBADF, EINVAL};

use std::cmp::min;

We start by defining our mutable scheme struct, which will implement the SchemeMut trait and hold the state of the scheme.


struct VecScheme {
    vec: Vec<u8>,
}

impl VecScheme {
    fn new() -> VecScheme {
        VecScheme {
            vec: Vec::new(),
        }
    }
}

First of all, we implement open(). Let it accept a reference, which will be the initial content of the vector.

Note that we ignore flags, uid, and gid.


impl SchemeMut for VecScheme {
    fn open(&mut self, path: &[u8], _flags: usize, _uid: u32, _gid: u32) -> Result<usize> {
        self.vec.extend_from_slice(path);
        Ok(path.len())
    }

So, now we implement read:


    fn read(&mut self, _id: usize, buf: &mut [u8]) -> Result<usize> {
        let res = min(buf.len(), self.vec.len());

        for b in buf {
            *b = if let Some(x) = self.vec.pop() {
                x
            } else {
                break;
            }
        }

        Result::Ok(res)
    }

Now, we will add write, which will simply push to the vector:


    fn write(&mut self, _id: usize, buf: &[u8]) -> Result<usize> {
        for &i in buf {
            self.vec.push(i);
        }

        Result::Ok(buf.len())
    }
}

In both our read and write implementations, we ignored the id parameter for simplicity's sake.

Note that the methods we left out will return errors when called, because of their default implementations.

Last, we need the main function:

fn main() {
    use syscall::data::Packet;
    use std::fs::File;
    use std::io::{Read, Write};
    use std::mem::size_of;

    let mut scheme = VecScheme::new();
    // Create the handler
    let mut socket = File::create(":vec").unwrap();
    loop {
        let mut packet = Packet::default();
        while socket.read(&mut packet).unwrap() == size_of::<Packet>() {
            scheme.handle(&mut packet);
            socket.write(&packet).unwrap();
        }
    }
}
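Once the daemon above is running, a client could exercise vec: roughly like this (a hedged sketch; the reference hi and the minimal error handling are just for illustration):

use std::fs::OpenOptions;
use std::io::{Read, Write};

fn main() {
    // Open the scheme read/write; the reference "hi" becomes the vector's
    // initial contents, as implemented in VecScheme::open above.
    let mut vec_resource = OpenOptions::new()
        .read(true)
        .write(true)
        .open("vec:hi")
        .expect("failed to open vec:hi");

    // write() pushes bytes onto the vector...
    vec_resource.write(b"Hello").expect("failed to write to vec:");

    // ...and read() pops them, so the bytes come back in reverse order.
    let mut buf = [0u8; 16];
    let count = vec_resource.read(&mut buf).expect("failed to read from vec:");
    println!("read {} bytes: {:?}", count, &buf[..count]);
}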

How to compile and run this example

For the time being, there is no easy way to compile and deploy binaries for/on the Redox platform.

There is, however, a way, and it goes as follows:

  1. Add your folder (as a soft link) into your local Redox repository (e.g. schemes/);
  2. Add your binary to the root Makefile (e.g. under the schemes target);
  3. Compile Redox via make;

Note: Do not add your folder to the git history; this 3-step process is only meant for local testing purposes. Also, make sure to name your folder and binary as specified in your binary's Cargo.toml.

Now, go ahead and test it by running make qemu (or a variant)!

Exercises for the reader

  • Compile and run the VecScheme example on the Redox platform;
  • There is also a Scheme trait, which is immutable. Come up with some use cases for this one;
  • Write a scheme that can run Brainfuck code;

The kernel of Redox

The Redox kernel largely derives from the concept of microkernels, with particular inspiration from MINIX. This chapter will discuss the design of the Redox kernel.

Microkernels

Redox's kernel is a microkernel. Microkernels stand out in their design by providing minimal abstractions in kernel space. Microkernels have an emphasis on user space, unlike monolithic kernels, which have an emphasis on kernel space.

The basic philosophy of microkernels is that any component which can run in user space should run in user space. Kernel-space should only be utilized for the most essential components (e.g., system calls, process separation, resource management, IPC, thread management, etc).

The kernel's main task is to act as a medium for communication and segregation of processes. The kernel should provide minimal abstraction over the hardware (that is, drivers, which can and should run in user mode).

Microkernels are more secure and less prone to crashes than monolithic kernels. This is because drivers and other abstractions run with fewer privileges and thus cannot damage the rest of the system. Furthermore, microkernels are very maintainable due to their small code size, which can potentially reduce the number of bugs in the kernel.

Like anything else, microkernels also have disadvantages. We will discuss these later.

Versus monolithic kernels

Monolithic kernels provide a lot more abstractions than microkernels.

An illustration

An illustration (from Wikimedia, by Wooptoo, license: public domain) shows how they differ.

TODO

A note on the current state

Redox has fewer than 9,000 lines of kernel code. For comparison, Minix has ~6,000 lines of kernel code.

We would like to move parts of Redox to user space to get an even smaller kernel.

Advantages of microkernels

There are quite a lot of advantages (and disadvantages!) to microkernels, a few of which will be covered here.

Modularity and customizability

Monolithic kernels are, well, monolithic. They do not allow as fine-grained control as microkernels. This is due to many essential components being "hard-coded" into the kernel, and thus requiring modifications to the kernel itself (e.g., device drivers).

Microkernels are very modular by nature. You can replace, reload, modify, change, and remove modules at runtime, without even touching the kernel.

Modern monolithic kernels try to solve this issue using kernel modules but still often require the system to reboot.

Security

Microkernels are undoubtedly more secure than monolithic kernels. The minimality principle of microkernels is a direct consequence of the principle of least privilege, according to which all components should have only the privileges absolutely needed to provide the needed functionality.

Many security-critical bugs in monolithic kernels stem from services and drivers running unrestricted in kernel mode, without any form of protection.

In other words: in monolithic kernels, drivers can do whatever, without restrictions, when running in ring 0.

Fewer crashes

Compared to microkernels, monolithic kernels tend to be crash-prone. A crashed driver in a monolithic kernel can crash the whole system, whereas with a microkernel the separation of concerns allows the system to handle such a crash safely.

In Linux we often see errors caused by drivers dereferencing bad pointers, which ultimately result in kernel panics.

There is very good documentation in MINIX about how this can be addressed by a microkernel.

Disadvantages of microkernels

Performance

Any modern operating system needs basic security mechanisms such as virtualization and segmentation of memory. Furthermore, any process (including the kernel) has its own stack and variables stored in registers. On every context switch, that is, each time a syscall is invoked or any other inter-process communication (IPC) takes place, several tasks have to be performed, including:

  • Saving caller registers, especially the program counter (caller: process invoking syscall or IPC)
  • Switching the MMU's page tables (which typically flushes the TLB)
  • Putting CPU in another mode (kernel mode, user mode)
  • Restoring callee registers (callee: process invoked by syscall or IPC)

These tasks are not inherently slower on microkernels, but microkernels have to perform them more frequently, because much of the system's functionality is carried out by user space processes, requiring additional context switches.

The performance difference between monolithic kernels and microkernels has narrowed over time, making their performance comparable. This is partly due to the smaller surface area of microkernels, which can be easier to optimize.

Unfortunately, Redox isn't quite there yet. We still have a relatively slow kernel since not much time has been spent on optimizing it.

Scheduling

Memory management

Drivers

Programs and Libraries

Redox can run programs. Some programs are interpreted by a runtime for the program's language, such as a script running in the Ion shell or a Python program. Others are compiled into machine instructions that run on a particular operating system (Redox) and specific hardware (e.g. x86 compatible CPU in 64-bit mode).

  • In Redox compiled binaries use the standard ELF ("Executable and Linkable Format") format.

Programs could directly invoke Redox syscalls, but most call library functions that are higher-level and more comfortable to use. You link your program with the libraries it needs.

  • Redox does not support dynamic-link libraries yet (issue #927), so the libraries that a program uses are statically linked into its compiled binary.
  • Most C and C++ programs call functions in a C standard library ("libc"), such as fopen.
    • Redox includes a port of the newlib C standard library, which provides some POSIX compatibility. This is how programs such as git can run on Redox.
  • Rust programs implicitly or explicitly call functions in the Rust standard library (libstd).
    • The Rust libstd includes an implementation of its system-dependent parts (such as file access and setting environment variables) for Redox, in src/libstd/sys/redox. Most of libstd works on Redox, so many command-line Rust programs can be compiled for Redox.

The Redox "cookbook" project includes recipes for compiling C and Rust projects into Redox binaries.

Coreutils

Coreutils is a collection of basic command line utilities included with Redox (or with Linux, BSD, etc.). This includes programs like ls, cp, cat and various other tools necessary for basic command line interaction.

Redox's coreutils aim to be more minimal than, for instance, the GNU coreutils included with most Linux systems.

Supplement utilities

Binutils

Binutils contains utilities for manipulating binary files. Currently Redox's binutils includes strings, disasm, hex, and hexdump.

Extrautils

Some additional command line tools are included in extrautils, such as less, grep, and dmesg.

Development in user space

Writing an application for Redox

Compiling your program

Thanks to Redox's cookbook, building programs is a snap. In this example, we will build the helloworld program that cargo new automatically generates.

Step One: Setting up your program

To begin, go to your projects directory, here assumed to be ~/Projects/. Open the terminal and run cargo new helloworld --bin. For reasons that will become clear later, your program must build from a git repository, so run cd helloworld; git init to make helloworld a git repo, and git status to see which files you need to add to the repo. You should see something like this:

On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

    .gitignore
    Cargo.toml
    src/

nothing added to commit but untracked files present (use "git add" to track)

Add all the files by running git add .gitignore Cargo.toml src/ and commit by running git commit -m "Added files to helloworld.".
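For reference, cargo new has already generated the entire program for you; src/main.rs contains just:

fn main() {
    println!("Hello, world!");
}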

There is only one more thing that must be done to set up your program. Go to the Cargo.toml of your project and add

[[bin]]
name = "helloworld"
path = "src/main.rs"

to the bottom. Now the program is sufficiently set up, and it is ready to be added to your Redox build.
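For orientation, the resulting Cargo.toml should look roughly like the sketch below; the exact version numbers and the authors line will differ on your machine:

[package]
name = "helloworld"
version = "0.1.0"
authors = ["you <you@example.com>"]

[dependencies]

[[bin]]
name = "helloworld"
path = "src/main.rs"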

Step Two: Adding your program to your Redox build

To be able to access your program from Redox, it must be added to both the build process and the filesystem. Adding your program to the filesystem is easy: go to your redox/filesystem.toml file and look under the [packages] table. It should look like this:

[packages]
#acid = {}
#binutils = {}
contain = {}
coreutils = {}
#dash = {}
extrautils = {}
#games = {}
#gcc = {}
#gnu-binutils = {}
#gnu-make = {}
installer = {}
ion = {}
#lua = {}
netstack = {}
netutils = {}
#newlib = {}
orbdata = {}
orbital = {}
orbterm = {}
orbutils = {}
#pixelcannon = {}
pkgutils = {}
ptyd = {}
randd = {}
redoxfs = {}
#rust = {}
smith = {}
#sodium = {}
userutils = {}

Under userutils = {} add a line for your own program:

userutils = {}
helloworld = {}

Now, when building the filesystem, Redox will look for a helloworld binary.

With the filesystem in order, you can now add your program to the build process by adding a recipe. Spoilers: this is the easy part.

Under redox/cookbook/recipes/, make a new directory called helloworld. In helloworld, create a file called recipe.sh.

Remember the git repository ~/Projects/helloworld/? It's about to be relevant. In the file recipe.sh, write:

GIT=~/Projects/helloworld/

With that, helloworld will now be built with, and accessible from, Redox.

Step Three: Running your program

Go up to your redox/ directory and run make all. Once it finishes, run make qemu, log in to Redox, open the terminal, and run helloworld. It should print:

Hello, world!

Ion shell

Ion is the underlying library for shells and command execution in Redox, as well as the default shell.

What Ion Is

1. The default shell in Redox

What is a shell?

A shell is a layer around the operating system kernel and libraries that allows users to interact with the operating system. That means a shell can be used on any operating system or standard library implementation, as long as the provided API is the same (Ion runs on both Linux and Redox). Shells can be either graphical (GUI) or command-line (CLI).

Text shells

Text shells are programs that provide an interactive user interface with an operating system. A shell reads input from users as they type and performs operations according to that input. This is similar to the read-eval-print loop (REPL) found in many programming languages (e.g. Python).
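As a toy illustration of that loop, here is a sketch in Rust; a real shell parses the line and spawns the requested programs instead of just echoing the input back:

use std::io::{self, BufRead, Write};

fn main() {
    let stdin = io::stdin();
    loop {
        // Print a prompt, like a shell does.
        print!("> ");
        io::stdout().flush().unwrap();

        // Read one line of input.
        let mut line = String::new();
        if stdin.lock().read_line(&mut line).unwrap() == 0 {
            break; // EOF (e.g. Ctrl-D)
        }

        // "Evaluate" and print: here we just echo the input back.
        println!("you typed: {}", line.trim());
    }
}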

Typical *nix shells

Probably the most famous shell is Bash, which can be found in the vast majority of Linux distributions, and also in macOS (formerly known as Mac OS X). On the other hand, FreeBSD uses tcsh by default.

There are many more shell implementations, but these two form the base of two fundamentally different sets:

  • Bourne shell syntax (bash, sh, zsh)
  • C shell syntax (csh, tcsh)

Of course these two groups are not exhaustive; it is worth mentioning at least the fish shell and xonsh. These shells try to abandon some features of old-school shells to make the language safer and saner.

Fancy features

Writing commands without any help from the shell would be exhausting and impossible for everyday work. Therefore, most shells (including Ion, of course!) include features such as command history, autocompletion based on history or man pages, shortcuts to speed up typing, and so on.

2. A scripting language

Ion can also be used to write simple scripts for common tasks or system configuration after startup. It is not meant to be a fully-featured programming language, but rather glue to connect other programs together.

Relation to terminals

Early terminals were devices used to communicate with large computer systems like IBM mainframes. Nowadays, Unix operating systems usually implement so-called virtual terminals (tty stands for teletypewriter ... whoa!) and terminal emulators (e.g. xterm, gnome-terminal).

Terminals are used to read input from a keyboard and display the textual output of the shell and other programs running inside it. This means that a terminal converts keystrokes into control codes that are then used by the shell. The shell provides the user with a command line prompt (for instance: user name and working directory), line editing capabilities (Ctrl + a, e, u, k...), history, and the ability to run other programs (ls, uname, vim, etc.) according to the user's input.

TODO: In Linux we have device files like /dev/tty, how is this concept handled in Redox?

Ion Manual

Ion has its own manual, which you can find here.

Contributing

So you'd like to contribute to make Redox better! Excellent. This chapter can help you to do just that.

Communication

Chat

The quickest and most open way to communicate with the Redox team is on our Mattermost chat server. Currently, the only way to join it is by sending an email to info@redox-os.org, which might take a little while, since it's not automated. We're currently working on an easier way to do this, but this is the most convenient way right now.

Reddit

You can find Redox on Reddit in /r/rust/ and /r/redox/. The weekly update news is posted on the former.

Direct contributions

Low-Hanging Fruit

aka Easy Targets for Newbies, Quick Wins for Pros

If you're not fluent in Rust:

  • Writing documentation
  • Using/testing Redox, filing issues for bugs and needed features
  • Web development (Redox website, separate repo)
  • Writing unit tests (may require minimal knowledge of Rust)

If you are fluent in Rust, but not OS Development:

  • Apps development
  • Shell (Ion) development
  • Package manager (pkgutils) development
  • Other high-level code tasks

If you are fluent in Rust, and have experience with OS Dev:

  • Familiarize yourself with the repository and codebase
  • Grep for TODO, FIXME, BUG, UNOPTIMIZED, REWRITEME, DOCME, and PRETTYFYME and fix the code you find.
  • Improve and optimize code, especially in the kernel

GitLab Issues

GitLab issues are a somewhat formal way to communicate with fellow Redox devs, but a little less quick and convenient than the chat. Issues are a good way to discuss specific topics, but if you want a quick response, using the chat is probably better.

If you haven't requested to join the chat yet, you should (if at all interested in contributing)!

Creating a Proper Bug Report

  1. Make sure the code you are seeing the issue with is up to date with upstream/master. This helps to weed out reports for bugs that have already been addressed.
  2. Make sure the issue is reproducible (trigger it several times). If the issue happens inconsistently, it may still be worth filing a bug report for it, but indicate approximately how often the bug occurs.
  3. Record build information like:
  • The Rust toolchain you used to build Redox
    • rustc -V and/or rustup show from your Redox project folder
  • The commit hash of the code you used
    • git rev-parse HEAD
  • The environment you are running Redox in (the "target")
    • qemu-system-x86_64 -version or your actual hardware specs, if applicable
  • The operating system you used to build Redox
    • uname -a or an alternative format
  4. Make sure that your bug doesn't already have an issue on GitLab. If you submit a duplicate, you should accept that you may be ridiculed for it, though you'll still be helped. Feel free to ask in the Redox chat if you're uncertain whether your issue is new.
  5. Create a GitLab issue following the template.
    • Non-bug-report issues may ignore this template.
  6. Watch the issue and be available for questions.
  7. Success!

With the help of fellow Redoxers, legitimate bugs can be kept to a low simmer, not a boil.

Pull Requests

It's completely fine to just submit a small pull request without first making an issue, but if it's a big change that will require a lot of planning and reviewing, it's best to start by writing an issue first. Also see the git guidelines.

Creating a Proper Pull Request

The steps given below are for the main Redox project - submodules and other projects may vary, though most of the approach is the same.

  1. Clone the original repository to your local PC using one of the following commands based on the protocol you are using:
    • HTTPS: git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
    • SSH: git clone git@gitlab.redox-os.org:redox-os/redox.git --origin upstream --recursive
    • Use HTTPS if you don't know which one to use. (Recommended: learn about SSH if you don't want to have to log in every time you push/pull!)
  2. Then rebase to ensure you're using the latest changes: git rebase upstream master
  3. Fork the repository
  4. Add your fork to your list of git remotes with
    • HTTPS: git remote add origin https://gitlab.redox-os.org/your-username/redox.git
    • SSH: git remote add origin git@gitlab.redox-os.org:your-username/redox.git
  5. Alternatively, if you already have a fork and copy of the repo, you can simply check to make sure you're up-to-date
    • Fetch the upstream: git fetch upstream master
    • Rebase with local commits: git rebase upstream/master
    • Update the submodules: git submodule update --recursive --init
  6. Optionally create a separate branch (recommended if you're making multiple changes simultaneously) (git checkout -b my-branch)
  7. Make changes
  8. Commit (git add . --all; git commit -m "my commit")
  9. Optionally run rustfmt on the files you changed and commit again if it did anything (check with git diff first)
  10. Test your changes with make qemu or make virtualbox (you might have to use make qemu kvm=no, formerly make qemu_no_kvm) (see Best Practices and Guidelines)
  11. Pull from upstream (git fetch upstream; git rebase upstream/master) (Note: try not to use git pull; it is equivalent to doing git fetch upstream; git merge upstream/master, which is not usually preferred for local/fork repositories, although it is fine in some cases.)
  12. Repeat step 10 to make sure the rebase still builds and starts
  13. Push to your fork (git push origin my-branch)
  14. Create a pull request, following the template
  15. Describe your changes
  16. Submit!

Community

Community outreach is an important part of Redox's success. If more people know about Redox, then more contributors are likely to step in, and everyone can benefit from their added expertise. You can make a difference by writing articles, talking to fellow OS enthusiasts, or looking for communities that would be interested in knowing more about Redox.

Best Practices and Guidelines

These are a set of best practices to keep in mind when making a contribution to Redox. As always, rules are made to be broken, but these rules in particular play a part in deciding whether to merge your contribution (or not). So do try to follow them.

Rust Style

Since Rust is a relatively small and new language compared to others like C, there's really only one standard. Just follow the official Rust standards for formatting, and maybe run rustfmt on your changes, until we set up the CI system to do it automatically.

Rusting Properly

Some general guidelines:

  • Use std::mem::replace and std::mem::swap when you can.
  • Use .into() and .to_owned() over .to_string().
  • Prefer passing references to the data over owned data. (Don't take String, take &str. Don't take Vec<T>, take &[T].)
  • Use generics, traits, and other abstractions Rust provides.
  • Avoid using lossy conversions (for example: don't do my_u32 as u16 == my_u16, prefer my_u32 == my_u16 as u32).
  • Prefer in-place construction (the box keyword) when doing heap allocations.
  • Prefer platform-independently sized integers over pointer-sized integers (u32 over usize, for example).
  • Follow the usual idioms of programming, such as "composition over inheritance", "let your program be divided in smaller pieces", and "resource acquisition is initialization".
  • When unsafe is unnecessary, don't use it. Safe code that is 10 lines longer is better than more compact unsafe code!
  • Be sure to mark parts that need work with TODO, FIXME, BUG, UNOPTIMIZED, REWRITEME, DOCME, and PRETTYFYME.
  • Use the compiler hint attributes, such as #[inline], #[cold], etc. when it makes sense to do so.
  • Try to banish unwrap() and expect() from your code in order to manage errors properly. Panicking must indicate a bug in the program (not an error you didn't want to manage). If you cannot recover from an error, print a nice error to stderr and exit. Check Rust's book about unwrapping.
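To make a few of these points concrete, here is a small illustrative sketch (the functions and the config.toml file name are made up for the example): it takes &str instead of String, widens rather than truncates when comparing integers of different sizes, and propagates errors instead of unwrapping:

use std::fs::File;
use std::io::Read;

// Take a string slice instead of an owned String.
fn greet(name: &str) -> String {
    format!("hello, {}", name)
}

// Return a Result instead of unwrapping inside the function.
fn read_config(path: &str) -> std::io::Result<String> {
    let mut contents = String::new();
    File::open(path)?.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() {
    println!("{}", greet("redox"));

    // Avoid a lossy comparison: widen the smaller type instead of truncating.
    let my_u32: u32 = 70_000;
    let my_u16: u16 = 4_464;
    assert!(my_u32 != u32::from(my_u16));

    // Handle errors instead of unwrapping.
    match read_config("config.toml") {
        Ok(contents) => println!("read {} bytes", contents.len()),
        Err(err) => eprintln!("could not read config: {}", err),
    }
}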

Avoiding Kernel Panics

When trying to access a slice, always use the common::GetSlice trait and the .get_slice() method to get a slice without causing the kernel to panic.

The problem with slicing in regular Rust, e.g. foo[a..b], is that if someone tries to access with a range that is out of bounds of an array/string/slice, it will cause a panic at runtime, as a safety measure. Same thing when accessing an element.

Always use foo.get(n) instead of foo[n] and handle the Option::None case explicitly. Doing it the regular way may work fine for applications, but never in the kernel. No possible panics should ever exist in kernel space, because then the whole OS would just stop working.
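In plain Rust terms (using std's checked accessors here rather than the kernel's GetSlice trait), the difference looks like this:

fn main() {
    let data: &[u8] = &[1, 2, 3];

    // data[10] or &data[1..10] would panic at runtime.
    // The checked accessors return an Option instead:
    match data.get(10) {
        Some(value) => println!("got {}", value),
        None => println!("index 10 is out of bounds, handled gracefully"),
    }

    // The same applies to ranges:
    if let Some(slice) = data.get(1..3) {
        println!("slice: {:?}", slice);
    } else {
        println!("range out of bounds");
    }
}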

Git Guidelines

  • Commit messages should describe their changes in present-tense, e.g. "Add stuff to file.ext" instead of "added stuff to file.ext".
  • Try to remove useless duplicate/merge commits from PRs as these clutter up history, and may make it hard to read.
  • Usually, when syncing your local copy with the master branch, you will want to rebase instead of merge. This is because merging will create duplicate commits that don't actually do anything when merged into the master branch.
  • When you start to make changes, you will want to create a separate branch, and keep the master branch of your fork identical to the main repository, so that you can compare your changes with the main branch and test out a more stable build if you need to.
  • You should have a fork of the repository on GitLab and a local copy on your computer. The local copy should have two remotes, upstream and origin; upstream should be set to the main repository and origin to your fork.