Introduction
This is the Redox OS book, which will go through (almost) everything about Redox: design, philosophy, how it works, how you can contribute, how to deploy Redox, and much more.
Please note that this book is a work in progress.
If you want to skip straight to trying out Redox, see Getting started.
If you want to contribute to Redox, read these guides: CONTRIBUTING and Developing for Redox.
What is Redox?
Redox OS is a general-purpose operating system written in Rust. Our aim is to provide a fully functioning Unix-like microkernel-based operating system that is secure, reliable, and free.
We have modest compatibility with POSIX, allowing Redox to run many programs without porting.
We take inspiration from Plan 9, Minix, seL4, Linux, and BSD. Redox aims to synthesize years of research and hard-won experience into a system that feels modern and familiar.
This book is written so that you don't need any prior knowledge of Rust or OS development.
Introducing Redox OS
Redox OS is a microkernel-based operating system with a large number of supported programs and components, creating a full-featured user and application environment. In this chapter, we will discuss the goals, philosophy, and scope of Redox.
Our Goals
Redox is an attempt to make a complete, fully-functioning, general-purpose operating system with a focus on safety, freedom, reliability, correctness, and pragmatism.
We want to be able to use it, without obstructions, as an alternative to Linux/BSD on our computers. It should be able to run most Linux/BSD programs with only minimal modifications.
We're aiming towards a complete, safe Rust ecosystem. This is a design choice, which hopefully improves correctness and security (see Why Rust).
We want to improve the security design when compared to other Unix-like operating systems by using safe defaults and limiting insecure configurations where possible.
The non-goals of Redox
We are not a Linux/BSD clone, nor fully POSIX-compliant, nor mad scientists who wish to redesign everything. Generally, we stick to well-tested and proven designs. If it ain't broke, don't fix it.
This means that a large number of programs and libraries will be compatible with Redox. Some things that do not align with our design decisions will have to be ported.
The key here is the trade off between correctness and compatibility. Ideally, you should be able to achieve both, but unfortunately, you can't always do so.
Our Philosophy
Redox OS is predominantly MIT X11-style licensed, including all software, documentation, and fonts. There are only a few exceptions to this, which are all licensed under other compatible open-source licenses.
The MIT X11-style license has the following properties:
- It gives you, the user of the software, complete and unrestrained access to the software, such that you may inspect, modify, and redistribute your changes
  - Inspection - Anyone may inspect the software for security vulnerabilities
  - Modification - Anyone may modify the software to fix security vulnerabilities
  - Redistribution - Anyone may redistribute the software to patch the security vulnerabilities
- It is compatible with GPL licenses - Projects licensed as GPL can be distributed with Redox OS
- It allows for the incorporation of GPL-incompatible free software, such as OpenZFS, which is CDDL licensed
The license does not restrict the software that may run on Redox, however -- and thanks to the microkernel architecture, even traditionally tightly-coupled components such as drivers can be distributed separately, so maintainers are free to choose whatever license they like for their projects.
Redox intends to be free forever, because we aim to be a foundational piece in creating secure and resilient systems.
Why a New OS?
The essential goal of the Redox project is to build a robust, reliable and safe general-purpose operating system. To that end, the following key design choices have been made.
Written in Rust
Wherever possible, Redox code is written in Rust. Rust enforces a set of rules and checks on the use, sharing, and deallocation of memory references. This almost entirely eliminates memory leaks, buffer overruns, use-after-free, and other memory errors that arise during development. The vast majority of security vulnerabilities in operating systems originate from memory errors. The Rust compiler rejects this type of error before it ever enters the code base.
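As a small, self-contained illustration (generic Rust, not Redox kernel code): the following program compiles and runs, while the commented-out line is exactly the kind of memory error the compiler rejects at compile time.

```rust
fn main() {
    let data = vec![1, 2, 3];
    // Ownership moves `data` into `owned`; the compiler now tracks the sole owner.
    let owned = data;

    // The borrow checker rejects use-after-move at compile time:
    // println!("{:?}", data); // error[E0382]: borrow of moved value: `data`

    // Indexing is bounds-checked, so an out-of-range access is a controlled
    // panic -- or a recoverable `None` when using `.get()`:
    assert_eq!(owned.get(10), None);

    let total: i32 = owned.iter().sum();
    println!("sum = {total}");
}
```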
Microkernel Architecture
The microkernel architecture moves as many components as possible out of the operating system kernel. Drivers, subsystems, and other operating system functionality run as independent processes in user space (daemons). The kernel's main responsibilities are the coordination of these processes and the management of system resources for them.
Most kernels, other than some real-time operating systems, use an event-handler design. Hardware interrupts and application system calls each trigger an event that invokes the appropriate handler. The kernel runs in supervisor mode, with access to all the system's resources. In monolithic kernels, the operating system's entire response to an event must be completed in supervisor mode. An error in the kernel, or even a misbehaving piece of hardware, can put the system into a state where it is unable to respond to any event. And because of the large amount of code in the kernel, the potential for vulnerabilities while in supervisor mode is vastly greater than in a microkernel design.
In Redox, drivers and many system services can run in user-mode, similar to user programs, and the system can restrict them so they can only access the resources they require for their designated purpose. If a driver fails or panics, it can be ignored or restarted with no impact on the rest of the system. A misbehaving piece of hardware might impact system performance or cause the loss of a service, but the kernel will continue to function and to provide whatever services remain available.
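The restart-on-failure idea can be sketched in plain Rust. Note that this is a generic supervisor pattern, not the actual Redox driver API; the `flaky_driver` task and its "fails on first start" rule are invented purely for illustration.

```rust
use std::panic;

// A stand-in for a driver task. The name and the failure rule are
// invented for illustration only.
fn flaky_driver(attempt: u32) -> u32 {
    if attempt == 0 {
        panic!("driver crashed on first start");
    }
    attempt // pretend this is useful work
}

fn main() {
    let mut result = None;
    // Supervisor loop: contain the panic and restart, with bounded retries.
    for attempt in 0..3 {
        match panic::catch_unwind(move || flaky_driver(attempt)) {
            Ok(value) => {
                result = Some(value);
                break;
            }
            Err(_) => eprintln!("driver failed on attempt {attempt}; restarting"),
        }
    }
    // The failure was contained; the rest of the program kept running.
    assert_eq!(result, Some(1));
    println!("driver recovered: {result:?}");
}
```

In Redox the isolation boundary is a separate user-space process rather than a caught panic, which is stronger still: even memory corruption in a driver cannot reach the supervisor.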
Thus Redox is a unique opportunity to demonstrate the potential of microkernels to the mainstream operating system world.
Advanced Filesystem
Redox provides an advanced filesystem, RedoxFS. It includes many of the attributes of ZFS, but in a more modular design.
More details on RedoxFS can be found here
Unix-like Utilities and API
Redox provides a Unix-like command interface, with many everyday utilities written in Rust but with familiar names and options. In addition, Redox system services include a programming interface that is a subset of the POSIX API, via relibc. This means that many Linux/POSIX programs can run on Redox with only a recompilation. While the Redox team has a strong preference for essential programs written in Rust, we are agnostic about the programming language of the programs users choose. This provides an easy migration path for systems and programs previously developed for a Unix-like platform.
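As an illustration of "recompilation, not porting": a Rust program that sticks to portable standard-library calls needs no source changes to target Redox. This sketch only uses `std` file and I/O APIs; the file name is arbitrary.

```rust
use std::fs;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Portable std calls sit on top of libc on Linux and on relibc when
    // the same source is compiled for Redox.
    let path = std::env::temp_dir().join("redox_book_demo.txt");

    let mut file = fs::File::create(&path)?;
    file.write_all(b"hello from a portable program\n")?;

    let text = fs::read_to_string(&path)?;
    print!("{text}");

    fs::remove_file(&path)?;
    Ok(())
}
```

On Linux this builds with a plain `cargo build`; targeting Redox is then (assuming a Redox toolchain is set up) a matter of building for the `x86_64-unknown-redox` target rather than changing the source.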
Redox Use Cases
Redox is a general-purpose operating system that can be used in many situations. Some of the key use cases for Redox are as follows.
Server
Redox has the potential to be a secure server platform for cloud services and web hosting. The improved safety and reliability that Redox can provide, as it matures, makes it an excellent fit for the server world. Work remains to be done on support for important server technologies such as databases and web servers, as well as compatibility with high-end server hardware.
Redox has plans underway for virtualization support. Although running an instance of Linux in a container on Redox will lose some of the benefits of Redox, it can limit the scope of vulnerabilities. Redox-on-Redox and Linux-on-Redox virtualization have the potential to be much more secure than Linux-on-Linux. These capabilities are still a ways off, but are among the objectives for the Redox team.
Desktop
The development of Redox for the desktop is well underway. Although support for accelerated graphics is limited at this time, Redox does include a graphical user interface, and integration with Rust-written GUI libraries such as winit, Iced, and Slint is an ongoing effort.
A Demo version of Redox is available with several games and programs to try. However, the most important objective for desktop is to host the development of Redox. We are working through issues with some of our build tools, and other developer tools such as editors have not been tested under daily use, but we continue to make this a priority.
Because the list of currently supported hardware is fairly limited, developers may need to obtain a Redox-compatible development machine once self-hosted development is available. We are adding hardware compatibility as quickly as we can, and we hope to support Redox development on a wide array of desktops and notebooks in the near future.
Infrastructure
Redox's modular architecture makes it ideal for many infrastructure applications, such as routers, telecom components, and edge servers, especially as more functionality is added to these devices. There are no specific plans for remote management yet, but Redox's potential for security and reliability makes it well suited for this type of application.
Embedded and IoT
For embedded systems with complex user interfaces and broad feature sets, Redox has the potential to be an ideal fit. As everyday appliances become Internet-connected devices with sensors, microphones, and cameras, they have the potential for attacks that violate the privacy of consumers in the sanctity of their homes. Redox can provide a full-featured, reliable operating system while limiting the likelihood of attacks. At this time, Redox does not yet have touchscreen support, video capture, or support for sensors and buttons, but these are well-understood technologies and can be added as they become priorities.
Mission-Critical Applications
Although there are no current plans to create a version of Redox for mission-critical applications such as satellites or air safety systems, it's not beyond the realm of possibility. As tools for correctness proofs of Rust software improve, it may be possible to create a version of Redox that is proven correct, within some practical limits.
How Redox Compares to Other Operating Systems
We share quite a lot with other operating systems.
System Calls
The Redox system call interface is Unix-like. For example, we have `open`, `pipe`, `pipe2`, `lseek`, `read`, `write`, `brk`, `execv`, and so on. Currently, we support the most common Linux system calls.
Compared to Linux, our system call interface is much more minimal. This is not an artifact of our development phase, but a deliberate minimalist design decision.
"Everything is a URL"
This is a generalization of "Everything is a file", largely inspired by Plan 9. In Redox, resources can be both socket-like and file-like, making them fast enough to use for virtually everything.
This way, we get a more unified system API. We will explain this later, in URLs, schemes, and resources.
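To make the idea concrete, here is a small hypothetical sketch (plain Rust, not Redox's real kernel code) of the naming scheme: a path such as `tcp:127.0.0.1:80` or `file:/bin/ls` splits into a scheme name and a reference, so a single `open`-style call can be routed to whichever daemon registered that scheme.

```rust
/// Split a Redox-style resource path into (scheme, reference).
/// A path without a scheme prefix defaults to "file", mirroring how
/// ordinary paths keep working. This helper is invented for
/// illustration; real scheme registration and routing happen in the
/// kernel and the scheme daemons.
fn split_scheme(path: &str) -> (&str, &str) {
    match path.split_once(':') {
        Some((scheme, reference)) => (scheme, reference),
        None => ("file", path),
    }
}

fn main() {
    assert_eq!(split_scheme("tcp:127.0.0.1:80"), ("tcp", "127.0.0.1:80"));
    assert_eq!(split_scheme("file:/bin/ls"), ("file", "/bin/ls"));
    assert_eq!(split_scheme("/bin/ls"), ("file", "/bin/ls"));
    println!("all paths routed");
}
```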
The kernel
Redox's kernel is a microkernel. The architecture is largely inspired by MINIX and seL4.
In contrast to Linux or BSD, Redox has around 25,000 lines of kernel code, a number that is often decreasing. Most system services are provided in user-space as daemons.
Having a vastly smaller amount of code in the kernel makes it easier to find and fix bugs and security issues efficiently. Andrew Tanenbaum (author of MINIX) stated that for every 1,000 lines of properly written C code, there is a bug. By that estimate, a monolithic kernel with nearly 25,000,000 lines of C code could contain nearly 25,000 bugs, while a microkernel with only 25,000 lines of C code would contain around 25.
Note that moving code out of the kernel does not necessarily reduce the total number of bugs; it moves them outside of kernel space, where they are far less dangerous.
The main idea is to have system components and drivers that would be inside a monolithic kernel exist in user space and follow the Principle of Least Authority (POLA), where every individual component:
- Is completely isolated in memory, as a separate user process (daemon)
- Cannot crash other components when it fails
- Cannot expose the entire system to foreign and untrusted code
- Cannot spread bugs or malware to other components
- Has restricted communication with other components
- Doesn't have admin/superuser privileges
Bugs are thereby moved to user space, which reduces their power.
All of this increases the reliability of the system significantly. This is useful for mission-critical applications and for users that want minimal issues with their computers.
Why Rust?
Why did we write an operating system in Rust? Why write in Rust at all?
Rust brings enormous advantages, because for operating systems, safety and stability matter a great deal.
Since operating systems are such an integral part of computing, they are among the most important pieces of software.
There have been numerous bugs and vulnerabilities in Linux, BSD, glibc, Bash, X11, and so on over the years, simply due to the lack of memory and type safety. Rust gets this right by enforcing memory safety statically.
Design does matter, but so does implementation. Rust attempts to avoid the unexpected memory-unsafe conditions that are a major source of security-critical bugs. Design flaws are a relatively transparent source of issues: you know what is going on, what was intended, and what was not. Implementation flaws are much harder to see.
The basic design of the kernel/user-space separation is fairly similar to Unix-like systems, at this point. The idea is roughly the same: you separate kernel and user-space, through strict enforcement by the kernel, which manages system resources.
However, we have an advantage: enforced memory and type safety. This is Rust's strong side, a large number of "unexpected bugs" (for example, undefined behavior) are eliminated at compile time.
The design of Linux and BSD is secure. The implementation is not. Many bugs in Linux originate in unsafe conditions (which Rust effectively eliminates) like buffer overflows, not the overall design.
We hope that using Rust we will produce a more secure and reliable operating system in the end.
Unsafes
`unsafe` is a way to tell Rust "I know what I'm doing!", which is often necessary when writing low-level code that provides safe abstractions. You cannot write a kernel without `unsafe`.
In that light, a kernel cannot be 100% safe; however, the unsafe parts have to be marked with an `unsafe` block, which keeps them segregated from the safe code. We seek to eliminate `unsafe` blocks where we can, and when we do use them, we are extremely careful.
This contrasts with kernels written in C, which cannot make guarantees about safety without costly formal analysis.
You can find out more about how `unsafe` works in the relevant section of the Rust book.
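As a small illustration of segregating `unsafe` behind a safe interface (a generic Rust pattern, not Redox kernel code): the `unsafe` block below is confined to one function whose own checks make it sound for every caller.

```rust
/// Return the element at `index`, skipping the bounds check only after
/// proving it redundant. The `unsafe` is confined to this function, so
/// every caller interacts with a fully safe API.
fn element_at(slice: &[u32], index: usize) -> Option<u32> {
    if index < slice.len() {
        // SAFETY: `index` was checked against `slice.len()` just above.
        Some(unsafe { *slice.get_unchecked(index) })
    } else {
        None
    }
}

fn main() {
    let data = [10, 20, 30];
    assert_eq!(element_at(&data, 1), Some(20));
    assert_eq!(element_at(&data, 7), None);
    println!("safe wrapper over unsafe: ok");
}
```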
Side Projects
Redox is a complete Rust operating system. In addition to the kernel, we are developing several side projects, including:
- RedoxFS - Redox file system inspired by ZFS.
- Ion - The Redox shell.
- Orbital - The desktop environment/display server of Redox.
- orbclient - Orbital client library.
- pkgutils - Redox package manager, with a command-line frontend and library.
- relibc - Redox C library.
- ralloc - A memory allocator.
- libextra - Supplement for libstd, used throughout the Redox code base.
- audiod - Redox audio server.
- bootloader - Redox boot loader.
- init - Redox init system.
- installer - Redox buildsystem builder.
- netstack - Redox network stack.
- redoxer - A tool to run/test Rust programs inside of a Redox VM.
- redox-linux - Redox userspace on Linux.
- sodium - A Vi-like editor.
- games - A collection of mini-games for Redox (similar to BSD games).
- OrbTK - Cross-platform Rust-written GUI toolkit (in maintenance mode).
- and a few other exciting projects you can explore here.
We also have three utility distributions, which are collections of small, useful command-line programs:
- coreutils - A minimal set of utilities essential for a usable system.
- extrautils - Extra utilities such as reminders, calendars, spellcheck, and so on.
- binutils - Utilities for working with binary files.
We also actively contribute to third-party projects that are heavily used in Redox.
- uutils/coreutils - Cross-platform Rust rewrite of the GNU Coreutils.
- smoltcp - The TCP/IP stack used by Redox.
What tools are fitting for the Redox distribution?
The Redox distribution includes the tools necessary for a usable system, and we offer variants with fewer programs.
The listed tools fall into three categories:
- Critical, which are needed for a full functioning and usable system.
- Ecosystem-friendly, which are there for establishing consistency within the ecosystem.
- Fun, which are "nice" to have and are inherently simple.
The first category should be obvious: an OS without certain core tools is a useless OS. The second category is there for consistency: making sure that the Redox infrastructure is integrated and consistent (e.g., pkgutils, OrbTK, and libextra). The third category contains tools which are likely to be non-default in the future, but are nonetheless in the official distribution right now, for the charm.
It is important to note that we seek to avoid non-Rust tools, for safety and consistency (see Why Rust).
Getting started
Redox is still at the experimental/alpha stage, but there's lots you can do with it, and it's fun to try it out. You can download and run the latest release. See the instructions for running in a virtual machine or running on real hardware.
Building Redox has information about setting up your system to compile Redox, which is necessary if you want to contribute to Redox development. Advanced Build gives a look under the hood of the build process to help you maintain your build environment.
Running Redox in a virtual machine
The downloadable images for Redox are located here. To try Redox using a virtual machine such as QEMU or VirtualBox, download the demo_harddrive file and check the SHA sum to ensure it downloaded correctly:
sha256sum $HOME/Downloads/redox_demo_x86_64*_harddrive.img
If you have more than one `.img` file in the `Downloads` directory, you may need to adjust this command.
You can then run the image in your preferred emulator. If you don't have an emulator installed, use the following command (Pop!_OS/Ubuntu/Debian) to install QEMU:
sudo apt-get install qemu-system-x86
This command will run qemu with various features Redox can use enabled:
SDL_VIDEO_X11_DGAMOUSE=0 qemu-system-x86_64 -d cpu_reset,guest_errors -smp 4 -m 2048 \
-chardev stdio,id=debug,signal=off,mux=on,"" -serial chardev:debug -mon chardev=debug \
-machine q35 -device ich9-intel-hda -device hda-duplex -netdev user,id=net0 \
-device e1000,netdev=net0 -device nec-usb-xhci,id=xhci -enable-kvm -cpu host \
-drive file=`echo $HOME/Downloads/redox_demo_x86_64*_harddrive.img`,format=raw
If you get an error with the filename, change the `echo $HOME/Downloads/redox_demo_x86_64*_harddrive.img` command to the name of the file you downloaded.
Using the emulation
As the system boots, it will ask you for a screen resolution to use, e.g. `1024x768`. After selecting a screen size, the system will complete the boot, start the Orbital GUI, and display a Redox login screen. Log in as user `user` with no password. The password for `root` is `password`. Use Ctrl+Alt+G to toggle the mouse behavior if you need to zoom out or exit the emulation. If your emulated cursor is out of alignment with your mouse position, type Ctrl+Alt+G to regain full cursor control, then click on your emulated cursor. Ctrl+Alt+F toggles between full screen and window views.
See Trying Out Redox for things to try.
If you want to try Redox in server mode, add `-nographic -vga none` to the command line above. You may wish to switch to the `redox_server` edition. There are also i686 editions available, although these are not part of the release.
Running on Windows
To install QEMU on Windows, follow the instructions here. The installation of QEMU will probably not update your command path, so the necessary QEMU command needs to be specified using its full path. Or, you can add the installation folder to your `Path` environment variable if you will be using it regularly.
Following the instructions for Linux above, download the same redox_demo image. Then, in a Command window, `cd` to the location of the downloaded Redox image and run the following very long command:
"C:\Program Files\qemu\qemu-system-x86_64.exe" -d cpu_reset,guest_errors -smp 4 -m 2048 -chardev stdio,id=debug,signal=off,mux=on,"" -serial chardev:debug -mon chardev=debug -machine q35 -device ich9-intel-hda -device hda-duplex -netdev user,id=net0 -device e1000,netdev=net0 -device nec-usb-xhci,id=xhci -drive file=redox_demo_x86_64_2022-11-23_638_harddrive.img,format=raw
Note: If you get a filename error, change `redox_demo_x86_64*_harddrive.img` to the name of the file you downloaded.
Note: If necessary, change `"C:\Program Files\qemu\qemu-system-x86_64.exe"` to reflect where QEMU was installed. The quotes are needed if the path contains spaces.
Running Redox on real hardware
Since version 0.8.0, Redox can now be installed on a partition on certain hard drives and internal SSDs, including some vintage systems. USB drives are not yet supported during runtime, although they can be used for installation and livedisk boot. Check the release notes for additional details on supported hardware. Systems with unsupported drives can still use the livedisk method described below. Ensure you back up your data before trying Redox on your hardware.
Hardware support is limited at the moment, so your mileage may vary. USB HID drivers are a work in progress but are not currently included, so a USB keyboard or mouse will not work. There is a PS/2 driver, which works with the keyboards and touchpads in many (but not all) laptops. For networking, the rtl8168d and e1000d Ethernet controllers are currently supported.
On some computers, hardware incompatibilities, e.g. disk driver issues, can slow Redox performance. This is not reflective of Redox in general, so if you find that Redox is slow on your computer, please try it on a different model for a better experience.
The current ISO image uses a bootloader to load the filesystem into memory (livedisk) and emulates a hard drive. You can use the system in this mode without installing. Although its use of memory is inefficient, it is fully functional and does not require changes to your drive. The ISO image is a great way to try out Redox on real hardware.
Creating a bootable USB drive or CD
You can obtain a livedisk ISO image either by downloading the latest release, or by building one. The demo ISO is recommended for most laptops. After downloading completes, check the SHA256 sum:
sha256sum $HOME/Downloads/redox_demo_x86_64*_livedisk.iso
Copy the ISO image to a USB drive using the "clone" method with your preferred USB writer. You can also use the ISO image on a CD/DVD (ensure the ISO will fit on your disk).
Booting the system
Once the ISO image boots, the system will display the Orbital GUI. Log in as user `user` with no password. The password for `root` is `password`.
See Trying Out Redox for things to try.
To switch between Orbital and the console, use the following keys:
- F1: Display the console log messages
- F2: Open a text-only terminal
- F3: Return to the Orbital GUI
If you want to be able to boot Redox from your HDD or SSD, follow the Installation instructions.
Redox isn't currently going to replace your existing OS, but it's a fun thing to try; boot Redox on your computer, and see what works.
Installing Redox on a drive
Once you have downloaded or built your ISO image, you can install it to your internal HDD or SSD. Please back up your system before attempting to install. Note that at this time (Release 0.8.0), you cannot install onto a USB drive, or use a USB drive for your Redox filesystem, but you can install from it.
After starting your livedisk system from a USB thumbdrive or from CD, log in as user `user` with an empty password. Open a Terminal window and type:
sudo redox_installer_tui
If Redox recognizes your drive, it will prompt you to select a partition to install on. Choose carefully, as it will erase all the data on that partition. Note that if your drive is not recognized, it may offer you the option to install on `disk/live` (the in-memory livedisk). Don't do this, as it will crash Redox.
Enter the number of the partition to install to. You will be prompted for a `redoxfs password`. This is for a secure filesystem. Leave the password empty and press Enter if a secure filesystem is not required.
Once the installation completes, power the system off, remove the USB, boot your system and you are ready to start using Redox!
Trying Out Redox
There are several games, demos and other things to try on Redox. Most of these are not included in the regular Redox build, so you will need to run the Demo system. There are both emulator and livedisk versions of the Demo system, available for download here. Currently, Redox does not have wifi support, so if you need wifi for some of the things you want to do, you are best to Run Redox in a Virtual Machine. Most of the suggestions below do not require network access, except where multiplayer mode is available.
On the Demo system, click on the Redox symbol in the bottom left corner of the screen. This brings up a menu, which, for the Demo system, has some games listed. Feel free to give them a try!
Many of the available commands are in the folders `/bin` and `/ui/bin`, which will be in your command path. Open a Terminal window and type `ls file:/bin` (or just `ls /bin`) to see some of the available commands. Some of the games listed below are in `/games`, which is not in your command path by default, so you may have to specify the full path for the command.
Programs
General programs
Rusthello
Rusthello is an advanced Reversi AI, made by HenryTheCat. It is highly concurrent, so this acts as a demonstration of Redox's multithreading capabilities. It supports various AIs, such as brute force, minimax, local optimizations, and hybrid AIs.
In a Terminal window, type `rusthello`.
Then you will get prompted for various things, such as difficulty, AI setup, and so on. When this is done, Rusthello interactively starts the battle between you and an AI or an AI and an AI.
Periodic Table
The Periodic Table (`/ui/bin/periodictable`) is a demonstration of the OrbTk user interface toolkit, which is part of the Redox Orbital user interface.
Sodium
Sodium is Redox's Vi-like editor. To try it out, open a terminal window and type `sodium`.
A short list of the Sodium defaults:
- `hjkl` - Navigation keys
- `ia` - Go to insert/append mode
- `;` - Go to command-line mode
- Shift-Space - Go to normal mode
For a more extensive list, write `;help`.
Games with a SDL backend
Freedoom
Freedoom is a first-person shooter in the form of content for a Doom engine. For Redox, we have included the PrBoom engine to run Freedoom. You can find the Freedoom website here. PrBoom can be found here. Click on the Redox logo on the bottom left, and choose `Games`, then choose `Freedoom`. Or open a Terminal window and try `/games/freedoom1` or `/games/freedoom2`.
Hit `Esc` and use the arrow keys to select Options->Setup->Key Bindings for keyboard help.
Neverball and Nevergolf
Neverball and Nevergolf are 3D pinball and golf, respectively. Click on the Redox logo on the bottom left, choose `Games`, then choose from the menu.
Sopwith
Sopwith is a game that allows you to control a small plane. Originally written in 1984, it used PC graphics, but is now available using the SDL library. In a Terminal window, type `sopwith`.
- Comma ( , ) - Pull back
- Slash ( / ) - Push forward
- Period ( . ) - Flip aircraft
- Space - Fire gun
- B - Drop Bomb
Syobonaction
Syobon Action is a 2D side-scrolling platformer that you won't enjoy. In a Terminal window, type `syobonaction`. It's recommended that you read the GitHub page so you don't blame us.
Terminal Games Written in Rust
Also check out some games that have been written in Rust, and use the Terminal Window for simple graphics. In a Terminal window, enter one of the following commands:
- `baduk` - Baduk/Go
- `dem` - Democracy
- `flappy` - Flappy Bird clone
- `ice` - Ice Sliding Puzzle
- `minesweeper` - Minesweeper, but it wraps
- `reblox` - Tetris-like falling blocks
- `redoku` - Sudoku
- `snake` - Snake
Building Redox
Congrats on making it this far! Now we gotta build Redox. This process is for x86_64 machines. There are also similar processes for i686 and AArch64/Arm64.
The build process fetches files from the Redox GitLab server. From time to time, errors may occur which may result in you being asked to provide a username and password during the build process. If this happens, first check for typos in the `git` URL. If that doesn't solve the problem and you don't have a Redox GitLab login, try again later, and if it continues to happen, you can let us know through chat.
Supported Distros and Podman Build
This build process for the current release (0.8.0) is for Pop!_OS/Ubuntu/Debian. The recommended build environment for other distros is our Podman Build; please follow those instructions instead. There is partial support for non-Debian distros in `bootstrap.sh`, but it is not maintained.
Preparing the Build
Bootstrap Prerequisites And Fetch Sources
If you're on a supported Linux distro, you can just run the bootstrap script, which does the build preparation for you. First, ensure that you have the program `curl` installed:
(This command is for Pop!_OS, Ubuntu or Debian, adjust for your system)
which curl || sudo apt-get install curl
Then run the following commands:
mkdir -p ~/tryredox
cd ~/tryredox
curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/bootstrap.sh -o bootstrap.sh
time bash -e bootstrap.sh
You will be asked to confirm various installations. Answer in the affirmative (y or 1 as appropriate). The above does the following:
- installs the program `curl` if it is not already installed
- creates a parent folder called `tryredox`. Within that folder, it will create another folder called `redox` where all the sources will reside.
- installs the prerequisite packages using your operating system's package manager (Pop!_OS/Ubuntu/Debian `apt`, Redhat/Centos/Fedora `dnf`, Arch Linux `pacman`)
- clones the Redox code from GitLab and checks out a redox-team tagged version of the different subprojects intended for the community to test and submit success/bug reports for
Note that `curl -sf` operates silently, so if there are errors, you may get an empty or incorrect version of `bootstrap.sh`. Check for typos in the command and try again. If you continue to have problems, join the chat and let us know.
Please be patient, this can take 5 minutes to an hour depending on the hardware and network you're running it on. Once it completes, update your path in the current shell with:
source ~/.cargo/env
Setting Config Values
The build system uses several configuration files, which contain settings that you may wish to change. These are detailed in Configuration Files. By default, the system builds for an `x86_64` architecture, using the `desktop` configuration (`config/x86_64/desktop.toml`). Set the desired `ARCH` and `CONFIG_FILE` in `.config`. There is also a shell script `build.sh` that will allow you to choose the architecture and filesystem contents easily, although it is only a temporary change.
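As a sketch only -- the exact variable names and syntax should be checked against Configuration Settings, since they may differ between releases -- a minimal `.config` selecting the defaults explicitly could look like:

```make
# .config -- read by the build system (make-style variable assignments)
ARCH?=x86_64
CONFIG_FILE?=config/x86_64/desktop.toml
```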
Compiling The Entire Redox Project
Now we have:
- fetched the sources
- tweaked the settings to our liking
- possibly added our very own source/binary package to the filesystem
We are ready to build the entire Redox Operating System Image. Skip ahead to build.sh if you want to build for a different architecture or with different filesystem contents.
Build all the components and packages
To build all the components and the packages to be included in the filesystem, run:
cd ~/tryredox/redox
time make all
This will make the target `build/x86_64/desktop/harddrive.img`, which you can run with an emulator.
Give it a while. Redox is big. This will do the following:
- fetch some sources for the core tools from the Redox source servers, then build them. As it progressively cooks each package, it fetches the package's sources and builds it.
- create a few empty files holding different parts of the final image filesystem.
- using the newly built core tools, build the non-core packages into one of those filesystem parts.
- fill the remaining filesystem parts appropriately with stuff built by the core tools to help boot Redox.
- merge the different filesystem parts into a final Redox Operating System image ready-to-run in an emulator.
Note that the filesystem parts are merged using FUSE. `bootstrap.sh` installs `libfuse`. If you have problems with the final assembly of Redox, check that `libfuse` is installed and that you are able to use it.
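A quick sanity check, assuming your distro's FUSE package provides the usual `fusermount` or `fusermount3` userspace tools:

```shell
# Look for the FUSE userspace tools; print a hint if neither is found.
which fusermount || which fusermount3 || echo "FUSE tools not found; install libfuse"
```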
build.sh
`build.sh` is a shell script that allows you to easily specify the architecture you are building for, and the filesystem contents. When you are doing Redox development, you should set them in `.config` (see Configuration Settings). But if you are just trying things out, use `build.sh` to run `make` for you, e.g.:
- `./build.sh -a i686 -c server live` - runs `make` for an `i686` architecture, using the `server` configuration, `config/i686/server.toml`. The resulting image is `build/i686/server/livedisk.iso`, which can be used for installation from a USB.
- `./build.sh -f config/aarch64/desktop.toml qemu` - runs `make` for an `arm64/AArch64` architecture, using the `desktop` configuration, `config/aarch64/desktop.toml`. The resulting image is `build/aarch64/desktop/harddrive.img`, which is then run in the emulator QEMU.
If you use `build.sh`, it's recommended that you do so consistently, as `make` will not be aware of which version of the system you previously built with `build.sh`. Details of `build.sh` and other settings are described in Configuration Settings.
Run in an emulator
You can immediately run your image `build/x86_64/desktop/harddrive.img` in an emulator with the following command:

```sh
make qemu
```
Note that if you built the system using `build.sh` to change the architecture or filesystem contents, you should also use it to run the emulator:

```sh
./build.sh -a i686 -c server qemu
```

will build `build/i686/server/harddrive.img` (if it does not exist) and run it in the QEMU emulator.
The emulator will display the Redox GUI. See Using the emulation for general instructions and Trying out Redox for things to try.
Run with no GUI
To run the emulation with no GUI, use:

```sh
make qemu vga=no
```

If you want to capture the terminal output, use:

```sh
script ~/my_log.txt
make qemu vga=no
exit
```

Running with no GUI is the recommended method of capturing console and debug output from the system or from your text-only program. The `script` command creates a new shell, capturing all input and output from the text console to the log file with the given name. Remember to type `exit` after the emulation terminates, in order to properly flush the output to the log file and terminate `script`'s shell.
If you have problems running the emulation, you can try `make qemu kvm=no` or `make qemu iommu=no` to turn off various virtualization features. These can also be used as arguments to `build.sh`.
Running The Redox Console With A Qemu Tap For Network Testing
You can expose Redox to other computers within a LAN by configuring QEMU with a "TAP" interface, which will allow other computers to test Redox client/server/networking capabilities.
Join the Redox chat if this is something you are interested in pursuing.
Building Redox Live CD/USB Image
For a livedisk or installable image, use:

```sh
cd ~/tryredox/redox
time make live
```

This will make the target `build/x86_64/desktop/livedisk.iso`, which can be copied to a USB drive or CD for booting or installation. See Creating a bootable USB drive or CD for instructions on creating a USB drive and booting from it.
Note
If you intend to contribute to Redox or its subprojects, please read Creating a Proper Pull Request so you understand our use of forks and set up your repository appropriately. You can use `./bootstrap.sh -d` in the `redox` folder to install the prerequisite packages if you have already done a `git clone` of the sources.
If you encounter any bugs, errors, obstructions, or other annoying things, please join the Redox chat or report the issue to the Redox repository. Thanks!
Podman Build
To make the Redox build process more consistent across platforms, we are using Rootless Podman for major parts of the build. Podman is invoked automatically and transparently within the Makefiles.
The TL;DR version is here. More details are available in Advanced Podman Build.
You can find out more about Podman here.
Disabling Podman Build
By default, Podman Build is disabled. The variable `PODMAN_BUILD` in `mk/config.mk` defaults to zero, so Podman will not be invoked. If you find that it is enabled but you want it disabled, set `PODMAN_BUILD?=0` in `.config`, and make sure it is not set in your environment (`unset PODMAN_BUILD`).
Podman Build Overview
Podman is a container manager that creates containers to execute a Linux distribution image. In our case, we are creating an Ubuntu image, with a Rust installation and all the packages needed to build the system.
The build process is performed in your normal working directory, e.g. `~/tryredox/redox`. Compilation of the Redox components is performed in the container, but the final Redox image (`build/$ARCH/$CONFIG/harddrive.img` or `build/$ARCH/$CONFIG/livedisk.iso`) is constructed using FUSE running directly on your host machine.
Setting `PODMAN_BUILD` to 1 in `.config`, on the `make` command line (e.g. `make PODMAN_BUILD=1 all`) or in the environment (e.g. `export PODMAN_BUILD=1; make all`) will cause Podman to be invoked when building.
First, a base image called `redox_base` will be constructed, with all the necessary packages for the build. A "home" directory will also be created in `build/podman`. This is the home directory of your container alter ego, `poduser`. It will contain the `rustup` install and the `.bashrc`. This takes some time, but is only done when necessary. The tag file `build/container.tag` is also created at this time to prevent unnecessary image builds.
Then, various `make` commands are executed in containers built from the base image. The files are constructed in your working directory tree, just as they would be for a non-Podman build. In fact, if all necessary packages are installed on your host system, you can switch Podman on and off relatively seamlessly, although there is no benefit to doing so.
The build process uses Podman's `keep-id` feature, which allows your regular user ID to be mapped to `poduser` in the container. The first time a container is built, it takes some time to set up this mapping. After the first container is built, new containers can be built almost instantly.
TL;DR - New or Existing Working Directory
New Working Directory
If you have already read the Building Redox instructions, but you wish to use Podman Build, follow these steps.
- Make sure you have the `curl` command, e.g. for Pop!_OS/Ubuntu/Debian:

```sh
which curl || sudo apt-get install curl
```
- Make a directory, get a copy of `podman_bootstrap.sh`, and run it. This will clone the repository and install Podman.

```sh
mkdir -p ~/tryredox
cd ~/tryredox
curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/podman_bootstrap.sh -o podman_bootstrap.sh
time bash -e podman_bootstrap.sh
```
- Change to the `redox` directory.

```sh
cd ~/tryredox/redox
```

- Check that the file `.config` was created in the `redox` base directory, and that it contains the line `PODMAN_BUILD?=1`.
- Build the system. This will take some time.

```sh
time make all
```
Existing Working Directory
If you already have the source tree, you can use these steps.
- Change to your working directory and get the updates to the build files.

```sh
cd ~/tryredox/redox
git fetch upstream master
git rebase upstream/master
```

- Install Podman. Many distros require additional packages. Check the Minimum Installation instructions to see what is needed for your distro. Or, run the following in your `redox` base directory:

```sh
./podman_bootstrap.sh -d
```
- Set `PODMAN_BUILD` to 1 and run `make`. The first container setup can take 15 minutes or more, but it is comparable in speed to a native build after that.

```sh
export PODMAN_BUILD=1
make all
make qemu
```

To ensure `PODMAN_BUILD` is properly set for future builds, edit `.config` in your base `redox` directory and change its value.

```sh
nano .config
```

```
PODMAN_BUILD?=1
```
Configuration Settings
There are many configurable settings that affect what edition of Redox you build, and how you build it.
mk/config.mk
The build system uses several makefiles, most of which are in the directory `mk`. We have grouped together most of the settings that might be interesting into `mk/config.mk`. However, it's not recommended that you change them there, especially if you are contributing to the Redox project. See `.config` below.
Open `mk/config.mk` in your favorite editor and have a look through it (but don't change it), e.g.

```sh
nano mk/config.mk
```
Environment and Command Line
You can temporarily override some of the settings in `mk/config.mk` by setting them either in your environment or on the `make` command line, e.g.

```sh
make CONFIG_NAME=demo qemu
```
Overriding the settings in this way is only temporary. Also, if you are using Podman Build, some settings may be ignored, so it is best to use `.config`.
.config
To permanently override any of the settings in `mk/config.mk`, create a file `.config` in your `redox` base directory (i.e. where you run the `make` command). Set the values in that file, e.g.

```
ARCH?=i686
CONFIG_NAME?=desktop-minimal
```

If you used `podman_bootstrap.sh`, this file may have been created for you already. The setting `PODMAN_BUILD?=1` must include the `?=` operator, as Podman Build changes its value during the build process.
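The `?=` operator assigns a value only if the variable is not already set, which is what lets the build process (or you, from the command line) override it. A tiny self-contained demonstration (the `Makefile.demo` file name is made up for illustration):

```shell
# Write a throwaway makefile using ?= and show how overrides behave.
# \t inside printf produces the TAB that make recipes require.
printf 'PODMAN_BUILD ?= 1\nall:\n\t@echo PODMAN_BUILD=$(PODMAN_BUILD)\n' > Makefile.demo
make -f Makefile.demo                   # no override: prints PODMAN_BUILD=1
make -f Makefile.demo PODMAN_BUILD=0    # command line wins: prints PODMAN_BUILD=0
```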
The purpose of `.config` is to allow you to change your configuration settings without worrying that they will end up in a Pull Request. `.config` is in the `.gitignore` list, so you won't accidentally commit it.
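You can confirm this with `git check-ignore`; in your `redox` directory the real check is just `git check-ignore -v .config`. A self-contained sketch using a throwaway repository:

```shell
# Demonstrate that a .gitignore entry keeps .config out of commits.
rm -rf ignore-demo && mkdir ignore-demo
git -C ignore-demo init -q
echo ".config" > ignore-demo/.gitignore
echo "ARCH?=i686" > ignore-demo/.config
git -C ignore-demo check-ignore .config && echo ".config is ignored"
```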
Architecture Names
The Redox build system supports cross-compilation to any CPU architecture defined by the `ARCH` environment variable. These are the supported architectures, based on the folders inside the `config` folder:
- i686 - `i686`
- x86_64 - `x86_64`
- ARM64 - `aarch64`
Filesystem Config
Which packages and programs to include in the Redox image are determined by a filesystem config file, which is a `.toml` file such as `config/x86_64/demo.toml`. Open `demo.toml` and have a look through it.

```sh
nano config/x86_64/demo.toml
```
For each supported CPU architecture, there are one or more filesystem configs to choose from. For `x86_64`, there are `desktop`, `demo` and `server` configurations, as well as a few others. For `i686`, there are also some stripped-down configurations for legacy systems with minimal RAM. Have a look in the directory `config/x86_64` for some examples.
For more details on the filesystem config, and how to include extra packages in your build, please see Including Programs in Redox.
Feel free to create your own filesystem config.
Filesystem Size
Filesystem size is the total amount of space allocated for the filesystem that is built into the image, including all packages and programs. It is specified in megabytes (MB). The typical size is 256 MB, although the `demo` config is larger. The filesystem needs to be large enough to accommodate the packages that are included in the filesystem. For the livedisk system, don't exceed the size of your RAM, and leave room for the system to run.
The value for filesystem size is normally set from the filesystem config file, e.g. `config/x86_64/demo.toml`:

```toml
...
filesystem_size = 768
...
```
If you wish to change it, it is recommended that you create your own filesystem config and edit it there. However, you can override it temporarily in your environment or on the `make` command line, e.g.:

```sh
make FILESYSTEM_SIZE=512 qemu
```
ARCH, FILESYSTEM_CONFIG and CONFIG_NAME
In `mk/config.mk`, you will find the variables `ARCH`, `CONFIG_NAME` and `FILESYSTEM_CONFIG`. These three variables determine what system you are building.
- `ARCH`: the CPU architecture that you are building the system for. Currently supported architectures are `x86_64` (the default), `i686` and `aarch64`.
- `CONFIG_NAME`: used to determine part of the name of the Redox image, and normally used to build the `FILESYSTEM_CONFIG` name (`desktop` by default).
- `FILESYSTEM_CONFIG`: a file that describes the packages and files to include in the filesystem. See Filesystem Config above. The default is `config/$ARCH/$CONFIG_NAME.toml`, but you can change it if your config file is in a different location.
If you want to change them permanently, edit `.config` in your `redox` base directory and provide new values.

```sh
nano .config
```

```
ARCH?=i686
CONFIG_NAME?=desktop_minimal
```
Or, you can set the values temporarily in your environment or on your `make` command line, e.g. `export ARCH=i686; make all` or `make ARCH=i686 all`. The first example sets the value for the lifetime of the current shell, while the second sets the value only for the current `make`.
The Redox image that is built is named `build/$ARCH/$CONFIG_NAME/harddrive.img` or `build/$ARCH/$CONFIG_NAME/livedisk.iso`.
build.sh
The script `build.sh` allows you to easily set `ARCH`, `FILESYSTEM_CONFIG` and `CONFIG_NAME` when running `make`. If you are not changing the values very often, it is recommended you set the values in `.config` rather than use `build.sh`. But if you are testing against different architectures or configurations, then this script can help minimize effort, errors and confusion.

```sh
./build.sh [-a ARCH] [-c CONFIG_NAME] [-f FILESYSTEM_CONFIG] TARGET...
```
The `TARGET` is any of the available `make` targets, although the recommended target is `qemu`. You can also include certain variable settings such as `vga=no`.
- `-f FILESYSTEM_CONFIG` allows you to specify a filesystem config file, which can be in any location but is normally in the directory `config/$ARCH`.
  If you do specify `-f FILESYSTEM_CONFIG`, but not `-a` or `-c`, the file path determines the other values. Normally the file would be located at e.g. `config/x86_64/desktop.toml`. `ARCH` is determined from the second-last element of the path. If the second-last element is not a known `ARCH` value, you must specify `-a ARCH`. `CONFIG_NAME` is determined from the basename of the file.
- `-a ARCH` is the CPU architecture you are building for: `x86_64`, `i686` or `aarch64`. The uppercase options `-X`, `-6` and `-A` can be used as shorthand for `-a x86_64`, `-a i686` and `-a aarch64` respectively.
- `-c CONFIG_NAME` is the name of the configuration, which appears in both the name of the image being built and (usually) the filesystem config.
  If you do not specify `-f FILESYSTEM_CONFIG`, the value of `FILESYSTEM_CONFIG` is constructed from `ARCH` and `CONFIG_NAME`: `config/$ARCH/$CONFIG_NAME.toml`.
  The default value for `ARCH` is `x86_64` and for `CONFIG_NAME` is `desktop`, which produces a default value for `FILESYSTEM_CONFIG` of `config/x86_64/desktop.toml`.
REPO_BINARY
If `REPO_BINARY` is set to 1 (`REPO_BINARY?=1`), your build system becomes binary-based for recipes. This is useful for purposes such as making development builds, testing package status, and saving time with heavy software.
You can have a mixed binary/source build: when you enable `REPO_BINARY`, every recipe with a `{}` is treated as a binary package, while recipes with `"recipe"` are treated as source, both inside your TOML config (`config/$ARCH/$CONFIG_NAME.toml`), for example:

```toml
[packages]
...
recipe1 = {} # binary package
recipe2 = "recipe" # source
...
```
Other Config Values
You can override other variables in your `.config`. Some interesting values in `mk/config.mk` are:
- `PREFIX_BINARY` - If set to 1 (`PREFIX_BINARY?=1`), the build system doesn't compile the toolchain from sources, but downloads/installs it from the Redox CI server. This can save lots of time during your first build. Note: If you are using Podman, you must set these variables in `.config` in order for your change to have any effect. Setting them in the environment or on the command line may not be effective.
- `REPO_BINARY` - If set to 1 (`REPO_BINARY?=1`), the build system doesn't compile from recipe sources, but downloads/installs packages from the Redox package server.
- `FILESYSTEM_SIZE`: The size in MB of the filesystem contained in the Redox image. See Filesystem Size before changing it.
- `REDOXFS_MKFS_FLAGS`: Flags to the program that builds the Redox filesystem. `--encrypt` enables disk encryption.
- `PODMAN_BUILD`: If set to 1 (`PODMAN_BUILD?=1`), the build environment is constructed in Podman. See Podman Build.
- `CONTAINERFILE`: The Podman containerfile. See Podman Build.
Downloading packages with pkg
`pkg` is the Redox package manager, which allows you to add binary packages to a running system. If you want to compile packages, or include binary packages during the build, please see Including Programs in Redox.
You may get better results in an emulator like QEMU than on real hardware (due to limited network device support).
This tool can be used instead of `make rebuild` if you add a new recipe to your TOML config (`desktop.toml`, for example).
- Clean an extracted package: `pkg clean package-name`
- Create a package: `pkg create package-name`
- Extract a package: `pkg extract package-name`
- Download a package: `pkg fetch package-name`
- Install a package: `pkg install package-name`
- List package contents: `pkg list package-name`
- Get a file signature: `pkg sign package-name`
- Upgrade all installed packages: `pkg upgrade`
- Replace `command` with one of the above options to get detailed information about it: `pkg help command`
All commands need to be run with `sudo` because `/bin` and `/pkg` belong to root.
The available packages can be found here.
Questions, Feedback, Reporting Issues
Join the Redox Chat. It is the best method to chat with the Redox team.
You can find historical questions in our Discourse Forum.
If you would like to report Redox issues, please go to the Redox Project GitLab Issues page and click the New Issue button.
The Design of Redox
This part of the Book will go over the design of Redox: the kernel, the user space, the ecosystem, the trade-offs and much more.
The system design of Redox
This chapter discusses the design of Redox.
Microkernels
Redox's kernel is a microkernel. Microkernels stand out in their design by providing minimal abstractions in kernel-space. Microkernels focus on user-space, unlike monolithic kernels, which focus on kernel-space.
The basic philosophy of microkernels is that any component which can run in user-space should run in user-space. Kernel-space should only be utilized for the most essential components (e.g., system calls, process separation, resource management, IPC, thread management).
The kernel's main task is to act as a medium for communication and segregation of processes. The kernel should provide minimal abstraction over the hardware (that is, drivers, which can and should run in user-space).
Microkernels are more secure and less prone to crashes than monolithic kernels. This is because most kernel components are moved to user-space and thus cannot damage the system. Furthermore, microkernels are extremely maintainable due to their small code size; this can potentially reduce the number of bugs in the kernel.
Like anything else, microkernels also have disadvantages.
Advantages of microkernels
There are quite a lot of advantages (and disadvantages) with microkernels, a few of which will be covered here.
Modularity and customizability
Monolithic kernels are, well, monolithic. They do not allow as fine-grained control as microkernels. This is due to many essential components being "hard-coded" into the kernel, and thus requiring modifications to the kernel itself (e.g., device drivers).
Microkernels are very modular by nature. You can replace, reload, modify, change, and remove modules, on runtime, without even touching the kernel.
Modern monolithic kernels try to solve this issue using kernel modules but still often require the system to reboot.
Security
Microkernels are undoubtedly more secure than monolithic kernels. The minimality principle of microkernels is a direct consequence of the principle of least privilege, according to which all components should have only the privileges absolutely needed to provide the needed functionality.
Many security-critical bugs in monolithic kernels stem from services and drivers running unrestricted in kernel mode, without any form of protection.
In other words: in monolithic kernels, drivers can do whatever they want, without restrictions, when running in ring 0.
Fewer crashes
Compared to microkernels, monolithic kernels tend to be crash-prone. A buggy driver in a monolithic kernel can crash the whole system, whereas with a microkernel there is a separation of concerns which allows the system to handle any crash safely.
In Linux we often see errors with drivers dereferencing bad pointers which ultimately results in kernel panics.
There is very good documentation in MINIX about how this can be addressed by a microkernel.
Sane debugging
In microkernels, the kernel components (drivers, filesystems, etc.) are moved to user-space, so bugs in them don't crash the kernel.
This is very important for debugging on real hardware: if a kernel panic happens, the log can't be saved to find the cause of the bug.
In monolithic kernels, a bug in a kernel component will cause a kernel panic and lock the system (on real hardware, you can't debug without serial output support).
(Buggy drivers are the main cause of kernel panics.)
Disadvantages of microkernels
Performance
Any modern operating system needs basic security mechanisms such as virtualization and segmentation of memory. Furthermore, any process (including the kernel) has its own stack and variables stored in registers. On each context switch, that is, each time a system call is invoked or any other inter-process communication (IPC) is done, some tasks have to be performed, including:
- Saving caller registers, especially the program counter (caller: process invoking the syscall or IPC)
- Reprogramming the MMU's page tables (and flushing the TLB)
- Putting the CPU into another mode (kernel mode and user mode, also known as ring 0 and ring 3)
- Restoring callee registers (callee: process invoked by the syscall or IPC)
These are not inherently slower on microkernels, but microkernels need to perform these operations more frequently. Much of the system's functionality is performed by user-space processes, requiring additional context switches.
The performance difference between monolithic kernels and microkernels has been marginalized over time, making their performance comparable. This is partly due to a smaller surface area which can be easier to optimize.
Unfortunately, Redox isn't quite there yet. We still have a relatively slow kernel since not much time has been spent on optimizing it.
Versus monolithic kernels
Monolithic kernels provide a lot more abstractions than microkernels.
The above illustration (from Wikimedia, by Wooptoo, license: public domain) shows how they differ.
Documentation about the kernel/user-space separation
Documentation about microkernels
- OSDev technical wiki
- Message passing documentation
- Minix documentation
- Minix features
- Minix reliability
- GNU Hurd documentation
- Fuchsia documentation
- HelenOS FAQ
- Minix paper
- seL4 whitepaper
- Microkernels performance paper
- Tanenbaum-Torvalds debate
A note on the current state
Redox has less than 30,000 lines of kernel code. For comparison, Minix has ~6,000 lines of kernel code.
We would like to move more parts of Redox to user-space to get an even smaller kernel.
Redox kernel
System calls are generally simple, and have an ABI similar to regular function calls. On x86_64, the kernel simply uses the `syscall` instruction, causing a mode switch from user mode (ring 3) to kernel mode (ring 0); when the system call handler is finished, it switches back using `sysretq`, as if the `syscall` instruction had been a regular `call` instruction.
Boot Process
Bootloader
The bootloader source can be found in `cookbook/recipes/bootloader/source` after a successful build, or at https://gitlab.redox-os.org/redox-os/bootloader.
BIOS boot
The first code to be executed on x86 systems using BIOS is the boot sector, called stage 1, which is written in Assembly and can be found in `asm/x86-unknown-none/stage1.asm`. This loads the stage 2 bootloader from disk, which is also written in Assembly. This stage switches to 32-bit mode and finally loads the Rust bootloader, called stage 3. These three bootloader stages are combined in one executable written to the first megabyte of the storage device. At this point, the bootloader follows the same common boot process on all boot methods, which can be seen in a later section.
EFI boot
Common boot process
The bootloader initializes the memory map and the display mode, both of which rely on firmware mechanisms that are not accessible after control is switched to the kernel. The bootloader then finds the RedoxFS partition on the disk and loads the `kernel`, `bootstrap`, and `initfs` into memory. It maps the kernel to its expected virtual address, and jumps to its entry function.
Kernel
The Redox kernel performs (fairly significant) architecture-specific initialization in the `kstart` function before jumping to the `kmain` function. At this point, the user-space bootstrap, a specially prepared executable that limits the required kernel parsing, sets up the `initfs` scheme, and loads and executes the `init` program.
Init
Redox has a multi-staged init process, designed to allow for the loading of disk drivers in a modular and configurable fashion. This is commonly referred to as an init ramdisk.
Ramdisk Init
The ramdisk init has the job of loading the drivers required to access the root filesystem and then transfer control to the filesystem init. This contains drivers for ACPI, for the framebuffer, and for AHCI, IDE, and NVMe disks. After loading all disk drivers, the RedoxFS driver is executed with the UUID of the partition where the kernel and other boot files were located. It then searches every driver for this partition, and if it is found, mounts it and then allows init to continue.
Filesystem Init
The filesystem init continues the loading of drivers for all other functionality. This includes audio, networking, and anything not required for disk access. After this, the login prompt is shown and, on desktop configurations, the Orbital display server is launched.
Login
After the init processes have set up drivers and daemons, it is possible for the user to log in to the system. The login program accepts a username, with a default user called `user`, prints the `/etc/motd` file, and then executes the user's login shell, usually `ion`. At this point, the user will be able to access the shell.
Graphical overview
Here is an overview of the initialization process with scheme creation and usage. For simplicity's sake, we do not depict all scheme interaction but at least the major ones. THIS IS CURRENTLY OUT OF DATE, BUT STILL INFORMATIVE
Boot process documentation
Memory Management
We still need to document our memory system.
Scheduling on Redox
The Redox kernel uses a scheduling algorithm called Round Robin Scheduling.
The kernel registers a function called an interrupt handler that the CPU calls periodically. This function keeps track of how many times it is called, and will schedule the next process ready for scheduling every 10 "ticks".
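As a toy illustration of the idea (not the kernel's actual code), here is a tick counter that rotates through runnable tasks every 10 ticks:

```shell
# Toy round-robin: every 10 "ticks" the scheduler switches to the next task.
procs="A B C"                  # the ready queue, head first
for tick in $(seq 1 30); do
  if [ $((tick % 10)) -eq 0 ]; then
    set -- $procs              # split the queue into positional parameters
    current=$1; shift          # take the task at the head of the queue
    procs="$* $current"        # move it to the back (round robin)
    echo "tick $tick: running task $current"
  fi
done
```

After 30 ticks, each of the three tasks has run exactly once and the queue is back in its original order.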
System Services in User Space
As in any microkernel-based operating system, most kernel components are moved to user-space and adapted to work there.
Monolithic kernels in general have hundreds of system calls due to the high number of kernel components (system calls are interfaces for these components), not to mention the number of sub-syscalls provided by ioctl and e.g. procfs/sysfs. Microkernels, on the other hand, can have dozens of them.
This happens because the non-core kernel components are moved to user-space, thereby relying on IPC instead, which we will explain later.
Userspace bootstrap is the first program launched by the kernel, and has a simple design. The kernel loads the `initfs` blob, containing both the bootstrap executable itself and the initfs image, that was passed from the bootloader. It creates an address space containing it, and jumps to a bootloader-provided offset. Bootstrap allocates a stack (in an Assembly stub), `mprotect`s itself, and does the remaining steps to exec the `init` daemon. It also sets up the `initfs` scheme daemon.
The system calls used for IPC are almost exclusively file-based. The kernel therefore has to know which scheme to forward certain system calls to. All file syscalls are marked with either `SYS_CLASS_PATH` or `SYS_CLASS_FILE`. In the former case, the kernel associates paths with schemes by checking their scheme prefix against the scheme's name; in the latter case, the kernel simply remembers which scheme opened file descriptors originated from. Most IPC in general is done using schemes, with the exception of regular pipes like Linux has, which use `pipe2`, `read`, `write`, and `close`. Any scheme can of course also set up its own custom pipe-like IPC that uses the aforementioned syscalls, like `shm:` and `chan:` from `ipcd`.
Schemes are implemented as a regular Rust trait in the kernel. Some builtin kernel schemes exist, which just implement that trait. Userspace schemes are provided via the `UserScheme` trait implementor, which relies on messages being sent between the kernel and the scheme daemon. This channel is created by scheme daemons when opening `:SCHEME_NAME`, which is parsed to the root scheme `""` with path `"SCHEME_NAME"`. Messages are sent by reading from and writing to that root scheme file descriptor.
So all file-based syscalls on files owned by userspace will send a message to that scheme daemon, and when the result is sent back, the kernel will return that result to the process doing the syscall.
Communication between userspace and the kernel is generally fast, even though the current syscall handler implementation is somewhat unoptimized. Systems with Meltdown mitigations would be an exception, although such mitigations are not yet implemented.
Drivers
On Redox, device drivers are user-space daemons. Being common Unix processes, they have their own namespaces with restricted schemes.
In other words, a driver on Redox can't damage other system interfaces, while on a monolithic kernel a driver could wipe your data, because the driver runs in the same address space as the filesystem (and thus at the same privilege level).
You can find the driver documentation in the repository README and the drivers' code.
RedoxFS
This is the default filesystem of Redox OS, inspired by ZFS and adapted to a microkernel architecture.
Redox had a read-only ZFS driver, but it was abandoned because the monolithic nature of ZFS created problems with the Redox microkernel design.
(It's a replacement for TFS.)
Current features:
- Compatible with Redox and Linux
- Copy-on-write
- Data/metadata checksums
- Transparent encryption
- Standard Unix file attributes
- File/directory size limit up to 193TiB (212TB)
- File/directory quantity limit up to 4 billion per 193TiB (2^32 - 1 = 4294967295)
- MIT licensed
- Disk encryption fully supported by the Redox bootloader, letting it load the kernel off an encrypted partition.
Being MIT licensed, RedoxFS can be used with GPL-licensed kernels (Linux, for example).
Graphics and Windowing
Drivers
vesad (VESA)
It's not really a driver; it writes to a framebuffer given by the firmware (via UEFI or BIOS software interrupts). Because we don't have GPU drivers yet, we rely on what the firmware gives us.
GPU drivers
On Linux/BSDs, GPU communication with the kernel is done through the DRM system (Direct Rendering Manager, the `libdrm` library), which Mesa3D drivers use to work (Mesa3D implements the OpenGL/Vulkan drivers, while DRM exposes the hardware interfaces).
That said, on Redox the GPU driver needs to be a user-space daemon which uses system calls/schemes to talk to the hardware.
The last step is to adapt our Mesa3D fork/recipe to use these user-space daemons.
Accelerated Graphics
We don't have GPU drivers yet, but LLVMpipe (OpenGL CPU emulation) and VirtIO (2D/3D acceleration from/for a virtual machine) are working.
Orbital
The Orbital desktop environment provides a display server, window manager and compositor.
- The display server is simpler than Wayland, making porting quicker and easier.
Libraries
Programs written with these libraries can run on Orbital:
- SDL1.2
- SDL2
- winit
- softbuffer
- Slint (uses winit/softbuffer)
- Iced (uses winit/softbuffer)
- egui (can use winit or SDL2)
Features
- Custom Resolutions
- App Launcher (bottom bar)
- File Manager
- Text Editor
- Calculator
- Terminal Emulator
If you hold the Super key (usually the key with a Windows logo), a pop-up will show all keyboard shortcuts.
Security
This page covers the current Redox security mechanisms.
- Namespaces and a capability-based system. Both are implemented by the kernel, but some parts can be moved to user-space.
- A namespace is a list of schemes. If you run `ls :`, it will show the schemes in the current namespace.
- Each process has a namespace.
- Capabilities are customized file descriptors that carry specific actions.
Sandbox
Redox allows limiting a program's capabilities, and thus allows sandboxing, by:

- Putting only a certain set of schemes in the program's namespace, or no schemes at all, in which case new file descriptors can't be opened.
- Forcing all functionality to occur via file descriptors (this is not finished yet).
URLs, Schemes, and Resources
An essential design choice made for Redox is to refer to resources using URL-style naming. This gives Redox the ability to
- treat resources (files, devices, etc.) in a consistent manner
- provide resource-specific behaviors with a common interface
- allow management of names and namespaces to provide sandboxing and other security features
- enable device drivers and other system resource management to communicate with each other using the same mechanisms available to user programs
What is a Resource
A resource is anything that a program might wish to access, usually referenced by some name. It may be a file in a filesystem, or frame buffer on a graphics device, or a dataset provided by some other computer.
What is a URL
A Uniform Resource Locator (URL) is a string that identifies some thing (resource) that a program wants to refer to. It follows a format that can be divided easily into component parts. In order to fully understand the meaning and interpretation of a URL, it is important to also understand URI and URN.
What is a Scheme
For the purposes of Redox, a URL includes a scheme that identifies the starting point for finding a resource, and a path that gives the details of which specific resource is desired. The scheme is the first part of the URL, up to (and for our purposes including) the first `:`. In a normal web URL, e.g. `https://en.wikipedia.org/wiki/Uniform_Resource_Name`, `https:` represents the communication protocol to be used. For Redox, we extend this concept to include not only protocols, but other resource types, such as `file:`, `display:`, etc., which we call schemes.
URLs
The URL is a string name in a specific format, normally used to identify resources across the web. For typical web usage, a URL has the following format:

`protocol://hostname/resource_path/resource_path/resource_name?query#fragment`

- The `protocol` tells your web browser how to communicate with the remote host.
- The `hostname` is used by the protocol to find the host computer that has the desired resource.
- The `resource_path` is used by the host computer's web server to find the desired resource. In a web server that includes e.g. static HTML as well as dynamically generated content, the web server uses a *router* to interpret the `resource_path` to find the desired resource, perhaps returning a static HTML file, or invoking a PHP interpreter on a PHP page. A path can consist of one or more parts separated by `/`.
- The `resource_name` is the logical "page" to be displayed.
- The `query` can provide additional details, such as a string to be used as a search parameter. Some websites include query details as an element of the `resource_name`, e.g. `profile/John_Smith`, while other websites use the `?` format following the resource name, e.g. `profile?name=John+Smith` or `customer/John_Smith?display=profile`.
- The `fragment` can provide extra detail to your browser about what to display. For example, it can identify a subheading within a page that your browser should scroll to. Fragments are not normally sent to the host computer; they are processed locally by your browser.
Redox URLs
For the purposes of Redox, a URL contains two parts:

- The scheme part. This represents the "receiver", i.e. what scheme provider will handle the (F)OPEN call. This can be any arbitrary UTF-8 string, and will often simply be the name of your protocol or the name of a program that manages your resource.
- The reference part. This represents the "payload" of the URL, namely what the URL refers to. Consider `file:/home/user/.profile` as an example. A URL starting with `file:` simply has a reference which is a path to a file. The reference can be any arbitrary byte string. The parsing, interpretation, and storage of the reference is left to the scheme. For this reason, it is not required to be a tree-like structure.
So, the string representation of a URL looks like:

`[scheme]:[reference]`

For example:

`file:/path/to/myfile`

For Redox, the format of the reference is determined by the scheme provider. Normally, the reference is a path, with path elements separated by the `/` character. But other formats for references may be more appropriate, so each scheme must document what format the reference should take. Redox does not yet have a formal way of documenting its use of URL formats, but will provide one in the future.

Any URL that starts with a scheme is assumed to use an absolute path, so `//` or `/` is not technically required for the first element. By convention, `/` is included before the first element when a multi-part path is supported by the scheme, but it is not included if the scheme doesn't use path-style references.
A URL that does not start with a scheme name is considered to be "relative to the current working directory". When a program starts, or when the program "changes directory", it has a "current working directory" that is prepended to the relative path. It's possible to "change directory" to a scheme, even if that scheme doesn't support the concept of directories or folders. Then all relative paths are prepended with the "current directory" (in this case a scheme name).
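As a rough illustration of this naming convention, here is a hypothetical helper (not Redox's actual parser; real resolution of relative paths is done by the system) that splits a URL into scheme and reference, and prepends a current working directory to a relative path:

```rust
/// Split a Redox-style URL into (scheme, reference).
/// Returns None if the string has no scheme prefix.
fn split_url(url: &str) -> Option<(&str, &str)> {
    url.split_once(':')
}

/// Resolve a possibly-relative path against a current working directory.
/// Simplification: anything containing a ':' is treated as already absolute.
fn resolve(cwd: &str, path: &str) -> String {
    if path.contains(':') {
        path.to_string()
    } else {
        format!("{}/{}", cwd.trim_end_matches('/'), path)
    }
}

fn main() {
    assert_eq!(
        split_url("file:/home/user/.profile"),
        Some(("file", "/home/user/.profile"))
    );
    // A relative name is prepended with the current working directory.
    println!("{}", resolve("file:/home/user", ".profile"));
}
```

Note that `split_once` stops at the first `:`, so `tcp:127.0.0.1:3000` correctly yields the scheme `tcp` and the reference `127.0.0.1:3000`.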
Some examples of URLs that Redox might use are:

- `file:/home/user/.profile` - The file `.profile` that can be found at the location `/home/user` within the scheme `file:`
- `tcp:127.0.0.1:3000` - Internet address `127.0.0.1`, port `3000`, using the scheme `tcp:`
- `display:3` - Virtual display `3`, provided by the display manager.
Opening a URL
A URL can be opened, yielding a file descriptor that is associated with a specific resource. The file descriptor can be read, written, and (for some resources) seeked; there are more operations, which are described later on.
We use a file API similar to the Rust standard library's for opening URLs:
```rust
use std::fs::OpenOptions;
use std::io::prelude::*;

fn main() {
    // Let's read from a TCP stream
    let tcp = OpenOptions::new()
        .read(true) // readable
        .write(true) // writable
        .open("tcp:127.0.0.1:3000");
}
```
Resources
A resource is any "thing" that can be referred to using a URL or a path. It can be a physical device, a logical pseudodevice, a file on a file system, a service that has a name, or an element of a dataset.
The client program accesses a resource by opening it, using the resource name in URL format. The first part of the URL is the name of the scheme, and the rest of the URL is interpreted by the scheme provider, assigning whatever meaning is appropriate for the resources included under that scheme.
Resource Examples
TODO Give good examples of resources
Schemes
Within Redox, a scheme may be thought of in a few ways. It is all of these things.
- It is the type of a resource, such as "file", "M.2 drive", "tcp connection", etc. (Note that these are not valid scheme names, they are just given by way of example.)
- It is the starting point for locating the resource, i.e. it is the root of the path to the resource, which the system can then use in establishing a connection to the resource.
- It is a uniquely named service that is provided by some driver or daemon program, with the full URL identifying a specific resource accessed via that service.
Kernel vs. Userspace Schemes
Schemes are implemented by scheme providers. A userspace scheme is implemented by a program running in user space, currently requiring `root` permission. A kernel scheme is implemented by the kernel directly. When possible, schemes should be implemented in userspace. Only critical schemes are implemented in kernel space.
Accessing Resources
In order to provide "virtual file" behavior, schemes generally implement file-like operations. However, it is up to the scheme provider to determine what each file-like operation means. For example, `seek` to an SSD driver scheme might simply add to a file offset, but to a floppy disk controller scheme, it might cause the physical movement of disk read-write heads.
Typical scheme operations include:

- `open` - Create a handle (file descriptor) to a resource provided by the scheme. E.g. `File::create("tcp:127.0.0.1:3000")` in a regular program would be converted by the kernel into `open("127.0.0.1:3000")` and sent to the `tcp:` scheme provider. The `tcp:` scheme provider would parse the name, establish a connection to Internet address `127.0.0.1`, port `3000`, and return a handle that represents that connection.
- `read` - Get some data from the thing represented by the handle, normally consuming that data so the next `read` will return new data.
- `write` - Send some data to the thing represented by the handle to be saved, sent or written.
- `seek` - Change the logical location of the next `read` or `write`. This may or may not cause some action by the scheme provider.
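For a simple in-memory resource, `seek` can be nothing more than offset arithmetic. The following sketch is purely illustrative (the `Handle` struct and whence values here are assumptions, not a real Redox driver API):

```rust
// Hypothetical per-handle state for a simple in-memory resource.
struct Handle {
    data: Vec<u8>,
    offset: usize,
}

// Mirrors the usual whence values: 0 = SET, 1 = CUR, 2 = END.
fn seek(handle: &mut Handle, pos: isize, whence: usize) -> Result<usize, ()> {
    let base = match whence {
        0 => 0,
        1 => handle.offset as isize,
        2 => handle.data.len() as isize,
        _ => return Err(()),
    };
    let new = base + pos;
    if new < 0 {
        return Err(()); // seeking before the start of the resource is an error
    }
    handle.offset = new as usize;
    Ok(handle.offset)
}

fn main() {
    let mut h = Handle { data: vec![0; 100], offset: 0 };
    assert_eq!(seek(&mut h, 10, 0), Ok(10)); // SEEK_SET
    assert_eq!(seek(&mut h, -3, 1), Ok(7));  // SEEK_CUR
    assert_eq!(seek(&mut h, 0, 2), Ok(100)); // SEEK_END
}
```

A device-backed scheme would replace this arithmetic with whatever physical action the seek implies.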
Schemes may choose to provide other standard operations, such as `mkdir`, but the meaning of the operation is up to the scheme. `mkdir` might create a directory entry, or it might create some type of substructure or container relevant to that particular scheme.

Some schemes implement `fmap`, which creates a memory-mapped area that is shared between the scheme resource and the scheme user. It allows direct memory operations on the resource, rather than reading and writing to a file descriptor. The most common use case for `fmap` is for a device driver to access the physical addresses of a memory-mapped device, using the `memory:` kernel scheme. It is also used for frame buffers in the graphics subsystem.
TODO add F-operations.
TODO Explain file-like vs. socket-like schemes.
Userspace Schemes
Redox creates userspace schemes during initialization, starting various daemon-style programs, each of which can provide one or more schemes.
| Name | Daemon | Description |
|---|---|---|
| `disk:` | `ahcid`, `nvmed` | Raw access to disks |
| `display:` | `vesad` | Screen multiplexing of the display; provides text and graphical screens, used by `orbital:` |
| `ethernet:` | `ethernetd` | Raw ethernet frame send/receive, used by `ip:` |
| `file:` | `redoxfs` | Root filesystem |
| `ip:` | `ipd` | Raw IP packet send/receive |
| `network:` | `e1000d`, `rtl8168d` | Link-level network send/receive, used by `ethernet:` |
| `null:` | `nulld` | Scheme that discards all writes and reads no bytes |
| `orbital:` | `orbital` | Windowing system |
| `pty:` | `ptyd` | Pseudoterminals, used by terminal emulators |
| `rand:` | `randd` | Pseudo-random number generator |
| `tcp:` | `tcpd` | TCP sockets |
| `udp:` | `udpd` | UDP sockets |
| `zero:` | `zerod` | Scheme that discards all writes and always fills read buffers with zeroes |
Kernel Schemes
The kernel provides a small number of schemes in order to support userspace.
| Name | Documentation | Description |
|---|---|---|
| `:` | `root.rs` | Root scheme - allows the creation of userspace schemes |
| `debug:` | `debug.rs` | Provides access to the serial console |
| `event:` | `event.rs` | Allows reading of `Event`s which are registered using `fevent` |
| `irq:` | `irq.rs` | Allows userspace handling of IRQs |
| `pipe:` | `pipe.rs` | Used internally by the kernel to implement `pipe` |
| `sys:` | `mod.rs` | System information, such as the context list and scheme list |
| `memory:` | `memory.rs` | Access to memory, typically physical memory addresses |
"Everything is a URL"
"Everything is a URL" is a generalization of "Everything is a file", allowing broader use of this unified interface for a variety of purposes. Every resource that can be referenced by a program can be given a name in URL format.
"Everything is a URL" is an important principle in the design of Redox. Roughly speaking, it means that the API, design, and ecosystem are centered around URLs, schemes, and resources as the main communication primitives. Applications communicate with each other, the system, daemons, etc., using URLs. As such, specialized system programs do not have to create their own constructs for communication.
By unifying the API in this way, you are able to have a consistent, clean, and flexible interface.
We can't really claim credit for this concept (beyond our exact design and implementation). The idea is not a new one and is very similar to 9P from Plan 9 by Bell Labs.
How it differs from "Everything is a file"
Unix has a concept of using file paths to represent "special files" that have some meaning beyond a regular file. For example, a device file is a reference to a device resource that looks like a file path.
With the "Everything is a file" concept provided by Unix-like systems, all sorts of devices, processes, and kernel parameters can be accessed as files in a regular filesystem. If you are on a Linux computer, try to `cd` to `/proc` and see what's going on there.
Redox extends this concept to a much more powerful one. Since each "scheme provider" is free to interpret the path in its own way, new schemes can be created as needed for each type of resource. This way USB devices don't end up in a "filesystem", but in a protocol-based scheme like `EHCI:`. It is not necessary for the file system software to understand the meaning of a particular URL, or to give a special file some special properties that then become a fixed file system convention.
Real files are accessible through a scheme called `file:`, which is widely used and specified in RFC 1630 and RFC 1738.
Redox schemes are flexible enough to be used in many circumstances, with each scheme provider having full flexibility to define its own path conventions and meanings, and only the programs that wish to take advantage of those meanings need to understand them.
Documentation about this design
Stitching It All Together
The "URL, scheme, resource" model is a unified interface for efficient inter-process communication. URLs are resource descriptors. Schemes are resource types, provided by scheme managers.
A quick, ugly diagram would look like this:
```
/
| +=========+
| | Program |
| +=========+
| +--------------------------------------+ ^ | write
| | | | |
User space < +----- URL -----+ | read | v
| | +-----------+ | open +---------+ open | +----------+
| | | Scheme |-|---+ +------->| Scheme |------------>| Resource |
| | +-----------+ | | | +---------+ +----------+
| | +-----------+ | | |
| | | Reference | | | |
| | +-----------+ | | |
\ +---------------+ | |
resolve | |
/ v |
| +=========+
Kernel space < | Resolve |
| +=========+
\
```
TODO improve diagram
Scheme Operation
A kernel scheme is implemented directly in the kernel. A userspace scheme is typically implemented by a daemon.
A scheme is created in the root scheme and listens for requests using the event scheme.
Root Scheme
The root scheme is a special scheme provided by the kernel. It acts as the container for all other scheme names. The root scheme is referenced as `:`, so when creating a new scheme, the scheme provider calls `File::create(":myscheme")`. The file descriptor that is returned by this operation is a message-passing channel between the scheme provider and the kernel. File operations performed by a regular program are translated by the kernel into message packets that the scheme provider reads and responds to, using this file descriptor.
Event Scheme
The `event:` scheme is a special scheme provided by the kernel that allows a scheme provider or other program to listen for events occurring on a file descriptor. A more detailed explanation of the `event:` scheme is here.

Note that very simple scheme providers do not use the `event:` scheme. However, if a scheme can receive requests or events from more than one source, the `event:` scheme makes it easy for the daemon (scheme provider) to block until something (an event) happens, do some work, then block again until the next event.
Daemons and Userspace Scheme Providers
A daemon is a program, normally started during system initialization. It runs with root permissions. It is intended to run continuously, handling requests and other relevant events. On some operating systems, daemons are automatically restarted if they exit unexpectedly. Redox does not currently do this but is likely to do so in the future.
On Redox, a userspace scheme provider is typically a daemon, although it doesn't have to be. The scheme provider informs the kernel that it will provide the scheme by creating it, e.g. `File::create(":myscheme")` will create the scheme `myscheme:`. Notice that the name used to create the scheme starts with `:`, indicating that it is a new entry in the root scheme. Since it is created in the root scheme, the kernel knows that it is a new scheme, as named schemes are the only things that can exist in the root scheme.
Namespaces
At the time a regular program is started, it becomes a process, and it exists in a namespace. The namespace is a container for all the schemes, files and directories that a process can access. When a process starts another program, the namespace is inherited, so a new process can only access the schemes, files and directories that its parent process had available. If a parent process wants to limit (sandbox) a child process, it does so as part of creating the child process.
Currently, Redox starts all processes in the "root" namespace. This will be corrected in the future, sandboxing all user programs so most schemes and system resources are hidden.
Redox also provides a `null` namespace. A process in the `null` namespace cannot open files or schemes by name, and can only use file descriptors that are already open. This is a security mechanism, mostly used by daemons running with `root` permission to prevent themselves from being hijacked into opening things they should not be accessing. A daemon will typically open its scheme and any resources it needs during its initialization, then ask the kernel to place it in the `null` namespace so no further resources can be opened.
Providing a Scheme
To provide a scheme, a program performs the following steps:

- Create the scheme, obtaining a file descriptor - `File::create(":myscheme")`
- Open a file descriptor for each resource that is required to provide the scheme's services, e.g. `File::open("irq:{irq-name}")`
- Open a file descriptor for a timer if needed - `File::open("time:{timer_type}")`
- Open a file descriptor for the event scheme (if needed) - `File::open("event:")`
- Move to the null namespace to prevent any additional resources from being accessed - `setrens(0,0)`
- Write to the `event:` file descriptor to register each of the file descriptors the provider will listen to, including the scheme file descriptor - `event_fd.write(&Event{fd, ...})`
Then, in a loop:

- Block, waiting for an event to read. For simple schemes, the scheme provider would not use this mechanism; it would simply do a blocking read of its scheme file descriptor.
- Read the event to determine (based on the file descriptor included in the event) if it is a timer, a resource event, or a scheme request.
- If it's a resource event, e.g. indicating a device interrupt, perform the necessary actions such as reading from the device and queuing the data for the scheme.
- If it's a scheme event, read a request packet from the scheme file descriptor and call the "handler".
  - The request packet will indicate if it's an `open`, `read`, `write`, etc. on the scheme.
  - An `open` will include the name of the item to be opened. This can be parsed by the scheme provider to determine the exact resource the requestor wants to access. The scheme will allocate a handle for the resource, with a numbered descriptor. Descriptor numbers are in the range 0 to `usize::MAX - 4096`, leaving the upper 4096 values as internal error codes. These descriptors are used by the scheme provider to look up the `handle` data structure it uses internally for the resource. The descriptors are typically allocated sequentially, but a scheme provider could return a pointer to the handle data structure if it so chooses.
  - Note that the descriptor returned from an `open` request is not the same as the file descriptor returned to the client program. The kernel maps between the client's (process id, `fd` number) and the scheme provider's (process id, `handle` number).
  - A `read` or `write`, etc., will be handled by the scheme, using the `handle` number to look up the information associated with the resource. The operation will be performed, or queued to be performed. If the request can be handled immediately, a response is sent back on the scheme file descriptor, matched to the original request.
- After all requests have been handled, loop through all `handles` to determine if any queued requests are now complete. A response is sent back on the scheme file descriptor for each completed request, matched to that request.
- Set a timer if appropriate, to enable handling of device timeouts, etc. This is performed as a `write` operation on the timer file descriptor.
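The kind of handle table described above can be sketched as follows. The names here are illustrative only, not part of any Redox API; a real provider would keep whatever per-resource state its scheme needs:

```rust
use std::collections::HashMap;

// Hypothetical per-resource state kept by a scheme provider.
struct Handle {
    path: String,
    offset: usize,
}

struct Provider {
    next_id: usize,
    handles: HashMap<usize, Handle>,
}

impl Provider {
    fn new() -> Self {
        Provider { next_id: 0, handles: HashMap::new() }
    }

    // Allocate a sequential descriptor for a newly opened resource.
    fn open(&mut self, path: &str) -> usize {
        let id = self.next_id;
        self.next_id += 1;
        self.handles.insert(id, Handle { path: path.to_string(), offset: 0 });
        id
    }

    // Look up the handle for a later read/write request.
    fn lookup(&mut self, id: usize) -> Option<&mut Handle> {
        self.handles.get_mut(&id)
    }
}

fn main() {
    let mut p = Provider::new();
    let id = p.open("myscheme:some/path");
    assert_eq!(id, 0);
    assert_eq!(p.lookup(id).unwrap().path, "myscheme:some/path");
    assert!(p.lookup(99).is_none());
}
```

Sequential allocation keeps descriptors small and easy to validate; as noted above, a provider could instead encode a pointer, at the cost of trusting its own bookkeeping.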
Kernel Actions
The kernel performs the following actions in support of the scheme:

- Any special resources required by a scheme provider are accessed as file operations on some other scheme. The kernel handles access to resources as it would for any other scheme.
- Regular file operations from user programs are converted by the kernel to request messages to the schemes. The kernel maps the user program's file descriptor to a scheme and a handle id provided by the scheme during the open operation, and places them in a packet.
- If the user program is performing a blocking read or write, the user program is suspended.
- The kernel sends event packets on the scheme provider's `event:` file descriptor, waking the blocked scheme provider. Each event packet indicates whether it is the scheme or some other resource, using the file descriptor obtained by the scheme provider during its initialization.
- When the scheme provider reads from its scheme file descriptor, it receives the packets the kernel created describing the client request, and handles them as described above.
- When the scheme provider sends a response packet, the kernel maps the response to a return value from the user program's file operation.
- When a blocking read or write is completed, the user program is marked ready to run, and the kernel will place it in the run queue.
Event Scheme
The `event:` scheme is a special scheme that is central to the operation of device drivers, schemes, and other programs that receive events from multiple sources. It's like a "clearing house" for activity on multiple file descriptors. The daemon or client program performs a `read` operation on the `event:` scheme, blocking until an event happens. It then examines the event to determine which file descriptor is active, and performs a non-blocking read of that file descriptor. In this way, a program can have many sources to read from, and rather than blocking on one of those sources while another might be active, the program blocks only on the `event:` scheme, and is unblocked if any one of the other sources becomes active.

The `event:` scheme is conceptually similar to Linux's epoll mechanism.
What is a Blocking Read
For a regular program doing a regular read of a regular file, the program calls `read`, providing an input buffer, and when the `read` call returns, the data has been placed into the input buffer. Behind the scenes, the system receives the `read` request and suspends the program, meaning that the program is put aside while it waits for something to happen. This is very convenient if the program has nothing to do while it waits for the `read` to complete. However, if the thing the program is reading from might take a long time, such as a slow device, a network connection, or input from the user, and there are other things for the program to do, such as updating the screen, a blocking read can prevent these other activities from being handled in a timely manner.
Non-blocking Read
To allow reading from multiple sources without getting stuck waiting for any particular one, a program can open a URL using the `O_NONBLOCK` flag. If data is ready to be read, the system immediately copies the data to the input buffer and returns normally. However, if data is not ready to be read, the `read` operation returns an error of type `EAGAIN`, which indicates that the program should try again later.

Now your program can scan many file descriptors, checking if any of them have data available to read. However, if none have any data, you want your program to block until there is something to do. This is where the `event:` scheme comes in.
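In Rust, a non-blocking read that would block surfaces as `io::ErrorKind::WouldBlock`. The retry pattern can be sketched with a toy reader standing in for a real non-blocking file descriptor (everything here is illustrative; a real program would block on the `event:` scheme rather than spin):

```rust
use std::io::{self, Read};

// Toy reader: fails with WouldBlock twice, then yields data.
// Stands in for a real non-blocking file descriptor.
struct Flaky {
    attempts: u32,
}

impl Read for Flaky {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.attempts < 2 {
            self.attempts += 1;
            return Err(io::Error::new(io::ErrorKind::WouldBlock, "try again"));
        }
        buf[..5].copy_from_slice(b"hello"); // requires buf.len() >= 5
        Ok(5)
    }
}

// Retry on WouldBlock; a real daemon would wait on `event:` instead of looping.
fn read_retrying<R: Read>(r: &mut R, buf: &mut [u8]) -> io::Result<usize> {
    loop {
        match r.read(buf) {
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => continue,
            other => return other,
        }
    }
}

fn main() {
    let mut buf = [0u8; 16];
    let n = read_retrying(&mut Flaky { attempts: 0 }, &mut buf).unwrap();
    assert_eq!(&buf[..n], b"hello");
}
```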
Using the Event Scheme
The purpose of the `event:` scheme is to allow the daemon or client program to receive a message on the `event_file`, informing it that some other file descriptor is ready to be read. The daemon reads from the `event_file` to determine which other file descriptor is ready. If no other descriptor is ready, the `read` of the `event_file` will block, causing the daemon to be suspended until the event scheme indicates some other file descriptor is ready.

Before setting up the event scheme, you should `open` all the other resources you will be working with, but set them to be non-blocking. E.g. if you are a scheme provider, open your scheme in non-blocking mode:
```rust
let mut scheme_file = OpenOptions::new()
    .create(true)
    .read(true)
    .write(true)
    .custom_flags(syscall::O_NONBLOCK as i32)
    .open(":myscheme")
    .expect("mydaemon: failed to create myscheme: scheme");
```
The first step in using the event scheme is to open a connection to it. Each program will have a connection to the event scheme that is unique, so no path name is required, only the name of the scheme itself.
```rust
let event_file = File::open("event:"); // you actually need to open it read/write
```
Next, write messages to the event scheme, one message per file descriptor that the `event:` scheme should monitor. A message is in the form of a `syscall::data::Event` struct.

```rust
use syscall::data::Event;

let _ = event_file.write(&Event { id: scheme_file.as_raw_fd(), ... }); // write one message per file descriptor
```
Note that timers in Redox are also handled via a scheme, so if you will be using a timer, you will need to open the `timer:` scheme and include that file descriptor among the ones your `event_file` should listen to.

Once your setup of the `event:` scheme is complete, you begin your main loop:
- Perform a blocking read on the `event:` file descriptor - `event_file.read(&mut event_buf);`
- When an event, such as data becoming available on a file descriptor, occurs, the `read` operation on the `event_file` will complete.
- Look at the `event_buf` to see which file descriptor is active.
- Perform a non-blocking read on that file descriptor.
- Do the appropriate processing.
- If you are using a timer, write to the timer file descriptor to tell it when you want an event.
- Repeat.
Non-blocking Write
Sometimes write operations can take time, such as sending a message synchronously or writing to a device with a limited buffer. The `event:` scheme allows you to listen for write file descriptors to become unblocked. If a single file descriptor is opened in read-write mode, your program will need to register with the `event:` scheme twice, once for reading and once for writing.
Implementing Non-blocking Reads in a Scheme
If your scheme supports non-blocking reads by clients, you will need to include some machinery to work with the `event:` scheme on your clients' behalf.

- Wait for an event that indicates activity on your scheme - `event_file.read(&mut event_buf);`
- Read a packet from your scheme file descriptor containing the request from the client program - `scheme_file.read(&mut packet)`
- The packet contains the details of which file descriptor is being read, and where the data should be copied.
- If the client is performing a `read` that would block, then queue the client request and return the `EAGAIN` error, writing the error response to your scheme file descriptor.
- When data is available to read, send an event by writing a special packet to your scheme, indicating the handle id that is active.

```rust
scheme_file.write(&Packet {
    a: syscall::number::SYS_FEVENT,
    b: handle_id,
    ...
});
```

- When routing this response back to the client, the kernel will recognize it as an event message, and post the event on the client's `event_fd`, if one exists.
- The scheme provider does not know whether the client has actually set up an `event_fd`. The scheme provider must send the event "just in case".
- If an event has already been sent, but the client has not yet performed a `read`, the scheme should not send additional events. In correctly coded clients, extra events should not cause problems, but an effort should be made to not send unnecessary events. Be wary, however, as race conditions can occur where you think an extra event is not required but it actually is.
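One common way to suppress duplicate events is a per-handle "event pending" flag, cleared when the client performs the consuming read. This is only a sketch of the bookkeeping (the names are hypothetical, not a Redox API), and it deliberately ignores the race conditions mentioned above:

```rust
// Hypothetical per-handle state tracking whether an event is outstanding.
struct Handle {
    event_pending: bool,
}

impl Handle {
    // Returns true if an event should actually be sent to the client.
    fn notify(&mut self) -> bool {
        if self.event_pending {
            false // an event is already outstanding; don't send another
        } else {
            self.event_pending = true;
            true
        }
    }

    // Called when the client performs the read that consumes the data.
    fn consumed(&mut self) {
        self.event_pending = false;
    }
}

fn main() {
    let mut h = Handle { event_pending: false };
    assert!(h.notify());  // first notification goes out
    assert!(!h.notify()); // duplicate is suppressed
    h.consumed();
    assert!(h.notify());  // after the client reads, notify again
}
```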
An Example
Enough theory! Time for an example.
We will implement a scheme which holds a vector. The scheme will push elements to the vector when it receives writes, and pop them when it is read. Let's call it `vec:`.
The complete source for this example can be found at redox-os/vec_scheme_example.
TODO the example has not been saved to the repo
Setup
In order to build and run this example in a Redox environment, you'll need to be set up to compile the OS from source. The process for getting a program included in a local Redox build is laid out in Including Programs. Pause here and follow the `helloworld` example in that guide if you want to get this example running.

This example assumes that `vec` was used as the name of the crate instead of `helloworld`. The crate should therefore be located at `cookbook/recipes/vec/source`.
Modify the `Cargo.toml` for the `vec` crate so that it looks something like this:

```toml
[package]
name = "vec"
version = "0.1.0"
edition = "2018"

[[bin]]
name = "vec_scheme"
path = "src/scheme.rs"

[[bin]]
name = "vec"
path = "src/client.rs"

[dependencies]
redox_syscall = "^0.2.6"
```
Notice that there are two binaries here. We'll need another program to interact with our scheme, since CLI tools like `cat` use more operations than we strictly need to implement for our scheme. The client uses only the standard library.
The Scheme Daemon
Create `src/scheme.rs` in the crate. Start by `use`ing a couple of symbols:

```rust
use std::cmp::min;
use std::fs::File;
use std::io::{Read, Write};

use syscall::Packet;
use syscall::scheme::SchemeMut;
use syscall::error::Result;
```
We start by defining our mutable scheme struct, which will implement the `SchemeMut` trait and hold the state of the scheme.

```rust
struct VecScheme {
    vec: Vec<u8>,
}

impl VecScheme {
    fn new() -> VecScheme {
        VecScheme {
            vec: Vec::new(),
        }
    }
}
```
Before implementing the scheme operations on our scheme struct, let's briefly
discuss the way that this struct will be used. Our program (`vec_scheme`) will
create the `vec` scheme by opening the corresponding scheme handler in the root
scheme (`:vec`). Let's implement a `main()` that initializes our scheme struct
and registers the new scheme:
```rust
fn main() {
    let mut scheme = VecScheme::new();
    let mut handler = File::create(":vec")
        .expect("Failed to create the vec scheme");
}
```
When other programs open/read/write/etc against our scheme, the Redox kernel will make those requests available to our program via this scheme handler. Our scheme will read that data, handle the requests, and send responses back to the kernel by writing to the scheme handler. The kernel will then pass the results of operations back to the caller.
```rust
fn main() {
    // ...
    let mut packet = Packet::default();

    loop {
        // Wait for the kernel to send us requests
        let read_bytes = handler.read(&mut packet)
            .expect("vec: failed to read event from vec scheme handler");

        if read_bytes == 0 {
            // Exit cleanly
            break;
        }

        // Scheme::handle passes off the info from the packet to the individual
        // scheme methods and writes back to it any information returned by
        // those methods.
        scheme.handle(&mut packet);

        handler.write(&packet)
            .expect("vec: failed to write response to vec scheme handler");
    }
}
```
Now let's deal with the specific operations on our scheme. The
`scheme.handle(...)` call dispatches requests to these methods, so that we
don't need to worry about the gory details of the `Packet` struct.
In most Unix systems (Redox included!), a program needs to open a file before
it can do very much with it. Since our scheme is just a "virtual filesystem",
programs call `open` with the path to the "file" they want to interact with
when they want to start a conversation with our scheme.
For our vec scheme, let's push whatever path we're given to the vec:
```rust
impl SchemeMut for VecScheme {
    fn open(&mut self, path: &str, _flags: usize, _uid: u32, _gid: u32) -> Result<usize> {
        self.vec.extend_from_slice(path.as_bytes());
        Ok(0)
    }
}
```
Say a program calls `open("vec:/hello")`. That call will work its way through
the kernel and end up being dispatched to this function through our
`Scheme::handle` call.
The `usize` that we return here will be passed back to us as the `id` parameter
of the other scheme operations. This way we can keep track of different open
files. In this case, we won't make a distinction between two different programs
talking to us, and simply return zero.
Similarly, when a process opens a file, the kernel returns a number (the file
descriptor) that the process can use to read and write to that file. Now let's
implement the read and write operations for `VecScheme`.
```rust
impl SchemeMut for VecScheme {
    // ...

    // Fill up buf with the contents of self.vec.
    // Note that this reverses the contents of the Vec.
    fn read(&mut self, _id: usize, buf: &mut [u8]) -> Result<usize> {
        let num_written = min(buf.len(), self.vec.len());

        for b in buf {
            if let Some(x) = self.vec.pop() {
                *b = x;
            } else {
                break;
            }
        }

        Ok(num_written)
    }

    // Simply push any bytes we are given to self.vec
    fn write(&mut self, _id: usize, buf: &[u8]) -> Result<usize> {
        for i in buf {
            self.vec.push(*i);
        }

        Ok(buf.len())
    }
}
```
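To make the pop-based `read` behavior concrete, here is a minimal standalone model of the same push/pop logic in plain Rust (no syscalls or `SchemeMut` involved; the `vec_write`/`vec_read` names are just for this sketch):

```rust
use std::cmp::min;

// Standalone model of VecScheme's write logic: append the bytes.
fn vec_write(vec: &mut Vec<u8>, buf: &[u8]) -> usize {
    vec.extend_from_slice(buf);
    buf.len()
}

// Standalone model of VecScheme's read logic: pop from the end.
fn vec_read(vec: &mut Vec<u8>, buf: &mut [u8]) -> usize {
    let num_written = min(buf.len(), vec.len());
    for b in buf {
        if let Some(x) = vec.pop() {
            *b = x;
        } else {
            break;
        }
    }
    num_written
}

fn main() {
    let mut v = Vec::new();
    vec_write(&mut v, b"Hello");

    let mut out = [0u8; 5];
    let n = vec_read(&mut v, &mut out);

    // Bytes come back reversed, because read pops from the end of the Vec.
    assert_eq!(n, 5);
    assert_eq!(&out, b"olleH");
    println!("{}", String::from_utf8_lossy(&out));
}
```

Running this prints `olleH`, which matches the reversed output the client sees later in this chapter.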
Note that each of the methods of the `SchemeMut` trait provides a default
implementation. These will all return errors since they are essentially
unimplemented. There's one more method we need to implement in order to prevent
errors for users of our scheme:
```rust
impl SchemeMut for VecScheme {
    // ...

    fn close(&mut self, _id: usize) -> Result<usize> {
        Ok(0)
    }
}
```
Most languages' standard libraries call close automatically when a file object is destroyed, and Rust is no exception.
To see all the possible operations on schemes, check out the API docs.
TODO There is no scheme documentation at this link
A Simple Client
As mentioned earlier, we need to create a very simple client in order to use
our scheme, since some command line tools (like `cat`) use operations other
than open, read, write, and close. Put this code into `src/client.rs`:
```rust
use std::fs::File;
use std::io::{Read, Write};

fn main() {
    let mut vec_file = File::open("vec:/hi")
        .expect("Failed to open vec file");

    vec_file.write(b" Hello")
        .expect("Failed to write to vec:");

    let mut read_into = String::new();
    vec_file.read_to_string(&mut read_into)
        .expect("Failed to read from vec:");

    println!("{}", read_into); // olleH ih/
}
```
We simply open some "file" in our scheme, write some bytes to it, read some bytes from it, and then spit those bytes out on stdout. Remember, it doesn't matter what path we use, since all our scheme does is add that path to the vec. In this sense, the vec scheme implements a global vector.
Running the Scheme
Since we've already set up the program to build and run in the Redox VM, simply
run:

```sh
make r.scheme
make image
make qemu
```
We'll need multiple terminal windows open in the QEMU window for this step.
Notice that both binaries we defined in our `Cargo.toml` can now be found in
`file:/bin` (`vec_scheme` and `vec`). In one terminal window, run
`sudo vec_scheme`. A program needs to run as root in order to register a new
scheme. In another terminal, run `vec` and observe the output.
Exercises for the reader
- Make the vec scheme print out something whenever it gets events. For example, print out the user and group ids of the user who tries to open a file in the scheme.
- Create a unique vec for each opened file in your scheme. You might find a hashmap useful for this.
- Write a scheme that can run code for your favorite esoteric programming language.
Programs and Libraries
Redox is a general-purpose operating system, and can therefore run many kinds of programs.
Some programs are interpreted by a runtime for the program's language, such as a script running in the Ion shell or a Python program. Others are compiled into CPU instructions that run on a particular operating system (Redox) and specific hardware (e.g. x86 compatible CPU in 64-bit mode).
- In Redox, the binaries use the standard ELF ("Executable and Linkable Format") format.
Programs could directly invoke Redox syscalls, but most call library functions that are higher-level and more comfortable to use. You link your program with the libraries it needs.
- Most C/C++ programs call functions in a C Standard Library (libc) such as `fopen`.
- Redox includes a Rust implementation of the standard C library called relibc. This is how programs such as Git and Python can run on Redox. relibc has some POSIX compatibility.
- Rust programs implicitly or explicitly call functions in the Rust standard library.
- The Rust libstd now includes an implementation of its system-dependent parts (such as file access and setting environment variables) for Redox, in `src/libstd/sys/redox`. Most of libstd works in Redox, so many terminal-based Rust programs can be compiled for Redox.
The Redox Cookbook package system includes recipes (software ports) for compiling C, C++ and Rust programs into Redox binaries.
Programs are ported to Redox case by case. If a program only needs small patches, the programmer can modify the Rust crate source code or add `.patch` files to the recipe folder. If big or dirty patches are needed, Redox creates a fork of the program on GitLab and rebases for a while in the `redox` branch of the fork (some Redox forks use branches for different versions).
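As a purely illustrative sketch (the recipe name and patch file name here are hypothetical), a recipe that carries a small patch might be laid out like this:

```
cookbook/recipes/myprogram/
├── recipe.toml
└── 01-fix-build.patch
```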
Components of Redox
Redox is made up of several discrete components.
Core
- ion - shell
- redoxfs - filesystem
- kernel
- drivers
- orbital - desktop environment
Orbital subcomponents
- orbterm - terminal
- orbdata - images, fonts, etc.
- orbaudio - audio
- orbutils - bunch of applications
- orblogin - login prompt
- orbtk - cross-platform Rust GUI toolkit, similar to GTK
- orbfont - font rendering library
- orbclient - display client
- orbimage - image rendering library
Default Programs
- sodium - text editor
- orbutils
- background
- browser
- calculator
- character map
- editor
- file manager
- launcher
- viewer
GUI
The desktop environment of Redox (Orbital) is provided by a set of programs that run in user-space.
- Orbital - The Orbital display and window manager sets up the `orbital:` scheme, manages the display, and handles requests for window creation, redraws, and event polling.
- Launcher - The launcher is a multi-purpose program that scans the applications in the `/apps/` directory and provides the following services:
  - Called without arguments - A taskbar that displays icons for each application
  - Called with arguments - An application chooser that opens a file in a matching program:
    - If one application is found that matches, it will be opened automatically
    - If more than one application is found, a chooser will be shown
Programs
The following are GUI utilities that can be found in the `/apps/` directory.
- Calculator - A calculator that provides similar functionality to the `calc` program.
- Editor - A simple editor that is similar to Notepad.
- File Browser - A file browser that displays icons, names, sizes, and details for files. It uses the `launcher` command to open files when they are clicked.
- Image Viewer - A simple image viewer.
- Sodium - A vi-like editor that provides syntax highlighting.
- Terminal Emulator - An ANSI terminal emulator that launches `sh` by default.
Ion
Ion is the underlying library for shells and command execution in Redox, as well as the default shell. Ion has its own manual, which you can find here.
1. The default shell in Redox
What is a shell?
A shell is a layer around the operating system kernel and libraries that allows users to interact with the operating system. That means a shell can be used on any operating system (Ion runs on both Linux and Redox) or implementation of a standard library, as long as the provided API is the same. Shells can be either graphical (GUI) or command-line (CLI).
Text shells
Text shells are programs that provide an interactive user interface to an operating system. A shell reads input from users as they type and performs operations according to the input. This is similar to the read-eval-print loop (REPL) found in many programming languages (e.g. Python).
Typical *nix shells
Probably the most famous shell is Bash, which can be found in the vast majority of Linux distributions, and also in macOS (formerly known as Mac OS X). On the other hand, FreeBSD uses tcsh by default.
There are many more shell implementations, but these two form the base of two fundamentally different sets:
- Bourne shell syntax (bash, sh, zsh)
- C shell syntax (csh, tcsh)
Of course these two groups are not exhaustive; it is worth mentioning at least the fish shell and xonsh. These shells try to abandon some features of old-school shells to make the language safer and more sane.
Fancy features
Writing commands without any help from the shell would be very exhausting and impractical for everyday work. Therefore, most shells (including Ion, of course!) include features such as command history, autocompletion based on history or man pages, shortcuts to speed up typing, etc.
2. A scripting language
Ion can also be used to write simple scripts for common tasks or system configuration after startup. It is not meant as a fully-featured programming language, but more like a glue to connect other programs together.
Relation to terminals
Early terminals were devices used to communicate with large computer systems like IBM mainframes. Nowadays Unix operating systems usually implement so-called virtual terminals (tty stands for teletypewriter... whoa!) and terminal emulators (e.g. xterm, gnome-terminal).
Terminals are used to read input from a keyboard and display textual output of the shell and other programs running inside it. This means that a terminal converts keystrokes into control codes that are further used by the shell. The shell provides the user with a command line prompt (for instance: user name and working directory), line editing capabilities (Ctrl + a,e,u,k...), history, and the ability to run other programs (ls, uname, vim, etc.) according to the user's input.
TODO: In Linux we have device files like `/dev/tty`; how is this concept handled in Redox?
Shell
When ion is called without `-c`, it starts a main loop, which can be found
inside `Shell.execute()`.
```rust
self.print_prompt();
while let Some(command) = readln() {
    let command = command.trim();
    if !command.is_empty() {
        self.on_command(command, &commands);
    }
    self.update_variables();
    self.print_prompt();
}
```
`self.print_prompt();` is used to print the shell prompt.
The `readln()` function is the input reader. The code can be found in `crates/ion/src/input_editor`.
The documentation about `trim()` can be found here.
If the command is not empty, the `on_command` method will be called.
Then, the shell will update variables, and reprint the prompt.
```rust
fn on_command(&mut self, command_string: &str, commands: &HashMap<&str, Command>) {
    self.history.add(command_string.to_string(), &self.variables);

    let mut pipelines = parse(command_string);

    // Execute commands
    for pipeline in pipelines.drain(..) {
        if self.flow_control.collecting_block {
            // TODO move this logic into "end" command
            if pipeline.jobs[0].command == "end" {
                self.flow_control.collecting_block = false;
                let block_jobs: Vec<Pipeline> = self.flow_control
                    .current_block
                    .pipelines
                    .drain(..)
                    .collect();
                match self.flow_control.current_statement.clone() {
                    Statement::For(ref var, ref vals) => {
                        let variable = var.clone();
                        let values = vals.clone();
                        for value in values {
                            self.variables.set_var(&variable, &value);
                            for pipeline in &block_jobs {
                                self.run_pipeline(&pipeline, commands);
                            }
                        }
                    },
                    Statement::Function(ref name, ref args) => {
                        self.functions.insert(name.clone(), Function {
                            name: name.clone(),
                            pipelines: block_jobs.clone(),
                            args: args.clone(),
                        });
                    },
                    _ => {}
                }
                self.flow_control.current_statement = Statement::Default;
            } else {
                self.flow_control.current_block.pipelines.push(pipeline);
            }
        } else {
            if self.flow_control.skipping() && !is_flow_control_command(&pipeline.jobs[0].command) {
                continue;
            }
            self.run_pipeline(&pipeline, commands);
        }
    }
}
```
First, `on_command` adds the command to the shell history with
`self.history.add(command_string.to_string(), &self.variables);`.
Then the script will be parsed. The parser code is in `crates/ion/src/peg.rs`.
The parser returns a set of pipelines, with each pipeline containing a set of jobs.
Each job represents a single command with its arguments.
You can take a look in `crates/ion/src/peg.rs`.
```rust
pub struct Pipeline {
    pub jobs: Vec<Job>,
    pub stdout: Option<Redirection>,
    pub stdin: Option<Redirection>,
}

pub struct Job {
    pub command: String,
    pub args: Vec<String>,
    pub background: bool,
}
```
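To get a feel for these structs, here is a standalone sketch that builds a plausible parse result for the input `ls -l | grep txt` by hand (the structs are simplified local copies; `Redirection` is stubbed out, and exactly how `args` is populated is an assumption here, not taken from the parser):

```rust
// Simplified local copies of the structs from crates/ion/src/peg.rs.
// Redirection is a stub; the real type carries a redirection target.
struct Redirection;

struct Job {
    command: String,
    args: Vec<String>,
    background: bool,
}

struct Pipeline {
    jobs: Vec<Job>,
    stdout: Option<Redirection>,
    stdin: Option<Redirection>,
}

fn main() {
    // A hand-built stand-in for parsing "ls -l | grep txt":
    // two jobs, no redirections, not backgrounded.
    let pipeline = Pipeline {
        jobs: vec![
            Job { command: "ls".to_string(), args: vec!["-l".to_string()], background: false },
            Job { command: "grep".to_string(), args: vec!["txt".to_string()], background: false },
        ],
        stdout: None,
        stdin: None,
    };

    assert_eq!(pipeline.jobs.len(), 2);
    assert_eq!(pipeline.jobs[0].command, "ls");
    assert!(pipeline.stdout.is_none() && pipeline.stdin.is_none());
    println!("pipeline has {} jobs", pipeline.jobs.len());
}
```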
What Happens Next:
- If the current block is a collecting block (a for loop or a function declaration) and the current command is `end`, we close the block:
- If the block is a for loop we run the loop.
- If the block is a function declaration we push the function to the functions list.
- If the current block is a collecting block but the current command is not `end`, we add the current command to the block.
- If the current block is not a collecting block, we simply execute the current command.
The code blocks are defined in crates/ion/src/flow_control.rs
.
```rust
pub struct CodeBlock {
    pub pipelines: Vec<Pipeline>,
}
```
The function code can be found in `crates/ion/src/functions.rs`.
The pipeline content is executed in `run_pipeline()`.
The `Command` struct inside `crates/ion/src/main.rs` maps each command to a description and a method to be executed.
For example:
For example:
```rust
commands.insert("cd", Command {
    name: "cd",
    help: "Change the current directory\n    cd <path>",
    main: box |args: &[String], shell: &mut Shell| -> i32 {
        shell.directory_stack.cd(args, &shell.variables)
    },
});
```
`cd` is described by `"Change the current directory\n cd <path>"`, and when
called, the method `shell.directory_stack.cd(args, &shell.variables)` will be
used. You can see its code in `crates/ion/src/directory_stack.rs`.
System Tools
Coreutils
Coreutils is a collection of basic command line utilities included with Redox (or with Linux, BSD, etc.). This includes programs like `ls`, `cp`, `cat` and various other tools necessary for basic command line interaction.
Redox's coreutils aim to be more minimal than, for instance, the GNU coreutils included with most Linux systems.
Binutils
Binutils contains utilities for manipulating binary files. Currently Redox's binutils includes `strings`, `disasm`, `hex`, and `hexdump`.
Extrautils
Some additional command line tools are included in extrautils, such as `less`, `grep`, and `dmesg`.
Contain
This program provides containers (namespaces) on Redox.
acid
The stress test suite of Redox, used to detect crashes and bugs.
resist
The POSIX test suite of Redox, used to measure how compliant the system is with the POSIX specification (higher compliance means better software portability).
Developing for Redox Overview
If you are considering contributing to Redox, or if you want to use Redox as a platform for something else, this part of the book will give you the details you need to be successful. Please join us on the Redox Chat and have a look at CONTRIBUTING.
Please feel free to jump around as you read through this part of the book and please try out some of the examples.
The following topics are covered here:
The Build Process
This chapter will cover the advanced build process of Redox.
Advanced Build
In this section, we provide the gory details that may be handy to know if you are contributing to or developing for Redox.
Setting up your Environment
If you intend on contributing to Redox or its subprojects, please read Creating Proper Pull Requests so you understand our use of forks, and set up your repository appropriately.
Although it is strongly recommended you use the Building Redox process or Podman Build instead of the process described here, advanced users may accomplish the same as the bootstrap.sh script with the following steps, which are provided by way of example for Pop!_OS/Ubuntu/Debian. For other platforms, have a look at the file bootstrap.sh to help determine what packages to install for your distro.
Be forewarned: for distros other than Pop!_OS/Ubuntu/Debian, neither `bootstrap.sh` nor this document is fully maintained, as the recommended environment is Podman. The core redox-os developers use Pop!_OS to build Redox.
The steps to perform are:
- Clone the repository
- Install the Pre-requisite packages
- Install Rust
- Adjust your Configuration Settings
- Build the system
Clone the repository
Create a directory and clone the repository.
```sh
mkdir -p ~/tryredox
cd ~/tryredox
git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
cd redox
git submodule update --recursive --init
```
Please be patient, this can take minutes to hours depending on the hardware and network you're running it on.
In addition to installing the various packages needed for building Redox, bootstrap.sh and podman_bootstrap.sh both clone the repository, so if you used either script, you have completed Step 1.
Install Pre-Requisite Packages and Emulators
If you cloned the source tree before running bootstrap.sh, you can use:
```sh
cd ~/tryredox/redox
./bootstrap.sh -d
```
to install the package dependencies without re-fetching any source. If you wish to install the dependencies yourself, some examples are given below.
Pop!_OS/Ubuntu/Debian Users
Install the package dependencies:
```sh
sudo apt-get install git autoconf autopoint bison build-essential cmake curl file flex genisoimage gperf libc6-dev-i386 libexpat-dev libfuse-dev libgmp-dev libhtml-parser-perl libpng-dev libtool libjpeg-dev libvorbis-dev libsdl2-ttf-dev libosmesa6-dev m4 nasm pkg-config po4a syslinux-utils texinfo libsdl1.2-dev ninja-build meson python3-mako
```
- If you want to use QEMU, run:

```sh
sudo apt-get install qemu-system-x86 qemu-kvm
```

- If you want to use VirtualBox, run:

```sh
sudo apt-get install virtualbox
```
Fedora Users
If you are unable to use Podman Build, you can attempt to install the prerequisite packages yourself. Some of them are listed here.
```sh
sudo dnf install git file autoconf vim bison flex genisoimage gperf glibc-devel.i686 expat expat-devel fuse-devel fuse3-devel gmp-devel perl-HTML-Parser libpng-devel libtool libjpeg-turbo-devel libvorbis-devel SDL2_ttf-devel mesa-libOSMesa-devel m4 nasm po4a syslinux texinfo sdl12-compat-devel ninja-build meson python3-mako make gcc gcc-c++ openssl patch automake perl-Pod-Html perl-FindBin gperf curl gettext-devel perl-Pod-Xhtml pkgconf-pkg-config cmake
```
- If you want to use QEMU, run:

```sh
sudo dnf install qemu-system-x86 qemu-kvm
```
- If you want to use VirtualBox, install from VirtualBox Linux Downloads page.
MacOS Users using MacPorts:
If you are unable to use Podman Build, you can attempt to install the prerequisite packages yourself. Some of them are listed here.
```sh
sudo port install git coreutils findutils gcc49 gcc-4.9 nasm pkgconfig osxfuse x86_64-elf-gcc cmake ninja po4a texinfo
```
- If you want to use QEMU, run:

```sh
sudo port install qemu qemu-system-x86_64
```

- If you want to use VirtualBox, run:

```sh
sudo port install virtualbox
```
If you have problems, try installing this Perl module:

```sh
cpan install HTML::Entities
```
MacOS Users using Homebrew:
If you are unable to use Podman Build, you can attempt to install the prerequisite packages yourself. Some of them are listed here.
```sh
brew install git automake bison gettext libtool make nasm gcc@7 gcc-7 pkg-config cmake ninja po4a macfuse findutils texinfo
```

and

```sh
brew install redox-os/gcc_cross_compilers/x86_64-elf-gcc
```
- If you want to use QEMU, run:

```sh
brew install qemu qemu-system-x86_64
```

- If you want to use VirtualBox, run:

```sh
brew install virtualbox
```
If you have problems, try installing this Perl module:

```sh
cpan install HTML::Entities
```
Install Rust Stable And Nightly
Install Rust, make the nightly version your default toolchain, then list the installed toolchains:
```sh
curl https://sh.rustup.rs -sSf | sh
```
then
```sh
source ~/.cargo/env
rustup default nightly
rustup toolchain list
cargo install --force --version 0.1.1 cargo-config
```
NOTE: `~/.cargo/bin` has been added to your PATH for the running session.
The line `. "$HOME/.cargo/env"` (equivalent to `source ~/.cargo/env`) will have been added to your shell start-up file, `~/.bashrc`, but you may wish to add it elsewhere or modify it according to your own environment.
Prefix
The tools that build Redox are specific to each CPU architecture. These tools are located in the directory `prefix`, in a subdirectory named for the architecture, e.g. `prefix/x86_64-unknown-redox`. If you have problems with these tools, you can remove the subdirectory or even the whole `prefix` directory, which will cause the tools to be re-downloaded or rebuilt. The variable `PREFIX_BINARY` in `mk/config.mk` controls whether they are downloaded or built.
Cookbook
The Cookbook system is an essential part of the Redox build system. Each Redox component package is built and managed by the Cookbook toolset. The variable `REPO_BINARY` in `mk/config.mk` controls whether the recipes are compiled from source or use binary packages from the Redox CI server; read the section REPO_BINARY for more details. See Including Programs in Redox for examples of using the Cookbook toolset. If you will be developing recipes to include in Redox, it is worthwhile to have a look at the tools in the `cookbook` directory.
Creating a Build Environment Shell
If you are working on specific components of the system, and will be using some of the tools in the `cookbook` directory while bypassing `make`, you may wish to create a build environment shell. This shell includes the `prefix` tools in your path. You can do this with:

```sh
make env
```
This command also works with Podman Build, creating a shell in Podman and setting PATH to include the necessary build tools.
Updating The Sources
If you want to update the Redox build system or if some of the recipes have changed, you can update those parts of the system with `make pull`. However, this will not update the source for the recipes.

```sh
cd ~/tryredox/redox
make pull
```

If you want to update the source for the recipes, use `make rebuild`, or remove the file `$(BUILD)/fetch.tag` and then use `make fetch`.
Changing the filesystem size and contents
You can modify the size and contents of the filesystem for emulation and livedisk as described in Configuration Settings.
Next steps
Once this is all set up, we can finally Compile! See Compiling The Entire Redox Project.
Advanced Podman Build
To make the Redox build process more consistent across platforms, we are using Rootless Podman for major parts of the build. The basics of using Podman are described here. This chapter provides a detailed discussion, including tips, tricks and troubleshooting, as well as some extra detail for those who might want to leverage or improve Redox's use of Podman.
Build Environment
- Environment and command line variables, other than ARCH, CONFIG_NAME and FILESYSTEM_CONFIG, are not passed to the part of `make` that is done in Podman. You must set any other config variables, e.g. `REPO_BINARY`, in .config and not on the command line or in your environment.
- If you are building your own software to include in Redox, and you need to install additional packages using `apt-get` for the build, follow Adding Ubuntu Packages to the Build.
Minimum Installation
Most of the packages required for the build are installed in the container as part of the build process. However, some packages need to be installed on the host computer. You may also need to install an emulator such as QEMU. For most Linux distros, this is done for you in `podman_bootstrap.sh`, but you can do a minimum install by following the instructions below.
Note that the Redox filesystem parts are merged using FUSE. `podman_bootstrap.sh` installs `libfuse` for most platforms, if it is not already included. If you have problems with the final assembly of Redox, check that `libfuse` is installed and you are able to use it.
Pop!_OS

```sh
sudo apt-get install podman
```

Ubuntu

```sh
sudo apt-get install podman curl git make libfuse-dev
```

Arch Linux

```sh
sudo pacman -S --needed git podman fuse
```

Fedora

```sh
sudo dnf install podman
```
build/container.tag
The building of the image is controlled by the tag file `build/container.tag`. If you run `make all` with `PODMAN_BUILD=1`, the file `build/container.tag` will be created after the image is built. This file tells `make` that it can skip updating the image after the first time.

Many targets in the Makefiles `mk/*.mk` include `build/container.tag` as a dependency. If the tag file is missing, building any of those targets may trigger an image to be created, which can take some time.
When you move to a new working directory, if you want to save a few minutes, and you are confident that your image is correct and your `poduser` home directory `build/podman/poduser` is valid, you can run:

```sh
make container_touch
```

This will create the file `build/container.tag` without rebuilding the image. However, it will fail if the image does not exist. If it fails, just do a normal `make`; it will create the container when needed.
Cleaning Up
To remove the base image, any lingering containers, `poduser`'s home directory (including the Rust install), and `build/container.tag`, use:

```sh
make container_clean
```

To check that everything has been removed:

```sh
podman ps -a
podman images
```

will show any remaining images or containers. If you need to do further cleanup,

```sh
podman system reset
```

will remove all images and containers. You may still need to remove `build/container.tag` if you did not do `make container_clean`.
In some rare instances, `poduser`'s home directory can have bad file permissions, and you may need to run:

```sh
sudo chown -R `id -un`:`id -gn` build/podman
```

where `id -un` gives your user name and `id -gn` gives your effective group name. Be sure to run `make container_clean` after that.
Note:
- `make clean` does not run `make container_clean` and will not remove the container image.
- If you already did `make container_clean`, doing `make clean` could trigger an image build and a Rust install in the container. It invokes `cargo clean` on various components, which it must run in a container, since the build is designed to not require Cargo on your host machine. If you have Cargo installed on the host and in your PATH, you can use `make PODMAN_BUILD=0 clean` to clean without building a container.
Debugging your Build Process
If you are developing your own components and wish to do one-time debugging to determine what library you are missing in the Podman Build environment, the following instructions can help. Note that your changes will not be persistent. After debugging, you must Add your Libraries to the Build. With `PODMAN_BUILD=1`, run the command:

```sh
make container_shell
```
This will start a `bash` shell in the Podman container environment, as a normal user without `sudo` privilege. Within that environment, you can build the Redox components with:

```sh
make repo
```

or, if you need to change `ARCH` or `CONFIG_NAME`:

```sh
./build.sh -a ARCH -c CONFIG_NAME repo
```
If you need `root` privileges while you are still running the above `bash` shell, go to a separate terminal or console window on the host, and type:

```sh
cd ~/tryredox/redox
make container_su
```

You will then be running `bash` with `root` privilege in the container, and you can use `apt-get` or whatever tools you need, and it will affect the environment of the user-level `container_shell` above. Do not precede the commands with `sudo`, as you are already `root`. And remember that you are in an Ubuntu instance.
Note: Your changes will not persist once both shells have been exited.
Type `exit` in both shells once you have determined how to solve your problem.
Adding Ubuntu Packages to the Build
This method can be used if you want to make changes or test inside the Ubuntu container with `make env`.
The default Containerfile, `podman/redox-base-containerfile`, imports all required packages for a normal Redox build.
However, you cannot easily add packages after the base image is created. You must add them to your own Containerfile and rebuild the container image.
Copy `podman/redox-base-containerfile` and add to the list of packages in the initial `apt-get`:
```sh
cp podman/redox-base-containerfile podman/my-containerfile
nano podman/my-containerfile
```

```
...
    xxd \
    rsync \
    MY_PACKAGE \
...
```

Make sure you include the continuation character `\` at the end of each line, except after the last package.
Then, edit .config, and change the variable `CONTAINERFILE` to point to your Containerfile, e.g.:

```
CONTAINERFILE?=podman/my-containerfile
```
If your Containerfile is newer than `build/container.tag`, a new image will be created. You can force the image to be rebuilt with `make container_clean`.
If you feel the need to have more than one image, you can change the variable `IMAGE_TAG` in `mk/podman.mk` to give the image a different name.
If you just want to install the packages temporarily, run `make env`, open a new terminal tab/window, run `make container_su`, and use `apt install` in that tab/window.
Troubleshooting Podman
If you have problems setting Podman to rootless mode, use these commands:
(These commands were taken from the official Podman rootless wiki and Shortcomings of Rootless Podman, so they could become outdated in the future. Read the wiki to see if the commands still match; we will try to keep the method working for everyone.)
- Install the `podman`, `crun`, `slirp4netns` and `fuse-overlayfs` packages on your system.
- `podman ps -a` - this command will show all your Podman containers; if you want to remove all of them, run `podman system reset`.
- Take this step if necessary (if your distribution's Podman uses cgroups V2): edit the `containers.conf` file in `/etc/containers` or your user folder at `~/.config/containers`, and change the line `runtime = "runc"` to `runtime = "crun"`.
- Execute `cat /etc/subuid` and `cat /etc/subgid` to see the user/group IDs (UIDs/GIDs) available for Podman.
If you don't want to edit the files manually, you can use this command:

```sh
sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 yourusername
```

You can use the values 100000-165535 for your user; just edit the two text files. We recommend `sudo nano /etc/subuid` and `sudo nano /etc/subgid`; when you finish, press Ctrl+X to save the changes.
- After changing the UID/GID values, execute this command:

  `podman system migrate`

- If you have a network problem in the container, this command will allow connections on port 443 (without root):

  `sudo sysctl net.ipv4.ip_unprivileged_port_start=443`

- Hopefully, you now have a working Podman build. (If you still have problems with Podman, check the Troubleshooting chapter or join us in the Redox Support room.)
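The `/etc/subuid` and `/etc/subgid` entries mentioned above use a `user:start:count` format. As a quick illustration with a sample file (the username below is a placeholder, not taken from your system):

```shell
# Write a sample subordinate-ID entry (format: user:start:count).
printf 'yourusername:100000:65536\n' > /tmp/subuid.sample
# The 2nd and 3rd colon-separated fields are the starting ID and range size:
cut -d: -f2-3 /tmp/subuid.sample
```

A count of 65536 starting at 100000 covers IDs 100000 through 165535, the same range the `usermod` command above grants.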
Let us know in the Redox Dev room if you have improvements for Podman troubleshooting.
Summary of Podman-related Make Targets, Variables and Podman Commands
- `PODMAN_BUILD` - If set to 1 in `.config`, in the environment, or on the `make` command line, much of the build process takes place in Podman.
- `CONTAINERFILE` - The name of the containerfile used to build the image. This file includes the `apt-get` command that installs all the necessary packages into the image. If you need to add packages to the build, edit your own containerfile and change this variable to point to it.
- `make build/container.tag` - If no container image has been built, build one. It's not necessary to do this manually; it will be done when needed.
- `make container_touch` - If a container image already exists and `poduser`'s home directory is valid, but there is no tag file, create the tag file so a new image is not built.
- `make container_clean` - Remove the container image, `poduser`'s home directory and the tag file.
- `make container_shell` - Start an interactive Podman `bash` shell in the same environment used by `make`; useful for debugging the `apt-get` commands used during image build.
- `make env` - Start an interactive `bash` shell with the `prefix` tools in your PATH. Automatically determines whether this should be a Podman shell or a host shell, depending on the value of `PODMAN_BUILD`.
- `make repo` or `./build.sh -a ARCH -c CONFIG repo` - Used while in a Podman shell to build all the Redox component packages. `make all` will not complete successfully, since part of the build process must take place on the host.
- `podman exec --user=0 -it CONTAINER bash` - Use this command in combination with `make container_shell` or `make env` to get `root` access to the Podman build environment, so you can temporarily add packages to the environment. `CONTAINER` is the name of the active container as shown by `podman ps`. For temporary, debugging purposes only.
- `podman system reset` - Use this command when `make container_clean` is not sufficient to solve problems caused by errors in the container image. It will remove all images; use with caution. If you are using Podman for any other purpose, those images will be deleted as well.
Gory Details
If you are interested in how we are able to use your working directory for builds in Podman, the following configuration details may be interesting.
We are using Rootless Podman's `--userns keep-id` feature. Because Podman is being run rootless, the container's `root` user is actually mapped to your user ID on the host. Without the `keep-id` option, a regular user in the container maps to a phantom user outside the container. With the `keep-id` option, a user in the container that has the same user ID as your host user ID will have the same permissions as you.

During the creation of the base image, Podman invokes Buildah to build the image. Buildah does not allow user IDs to be shared between the host and the container in the same way that Podman does. So the base image is created without `keep-id`, then the first container created from the image, having `keep-id` enabled, triggers a remapping. Once that remapping is done, it is reused for each subsequent container.
The working directory is made available in the container by mounting it as a volume. The Podman option:

```
--volume "`pwd`":$(CONTAINER_WORKDIR):Z
```

takes the directory that `make` was started in as the host working directory, and mounts it at the location `$CONTAINER_WORKDIR`, normally set to `/mnt/redox`. The `:Z` at the end of the name indicates that the mounted directory should not be shared between simultaneous container instances. It is optional on some Linux distros, and not optional on others.
For our invocation of Podman, we set the PATH environment variable as an option to `podman run`. This is to avoid the need for our `make` command to run `.bashrc`, which would add extra complexity. The `ARCH`, `CONFIG_NAME` and `FILESYSTEM_CONFIG` variables are passed in the environment to allow you to override the values in `mk/config.mk` or `.config`, e.g. by setting them on your `make` command line or by using `build.sh`.

We also set `PODMAN_BUILD=0` in the environment, to ensure that the instance of `make` running in the container knows not to invoke Podman. This overrides the value set in `.config`.
In the containerfile, we use as few `RUN` commands as possible, as Podman commits the image after each command. We also avoid `ENTRYPOINT`, to allow us to specify the `podman run` command as a list of arguments, rather than just a string to be processed by the entrypoint shell.

Containers in our build process are run with `--rm` to ensure the container is discarded after each use. This prevents a proliferation of used containers. However, when you use `make container_clean`, you may notice multiple items being deleted. These are the partial images created as each `RUN` command is executed while building.

Container images and container data are normally stored in the directory `$HOME/.local/share/containers/storage`. The command:

```sh
podman system reset
```

removes that directory in its entirety. However, the contents of any volume are left alone.
Working with i686
The Redox build system now supports building for multiple CPU architectures in the same directory tree. Building for `i686` or `aarch64` only requires that you set the `ARCH` Make variable to the correct value. Normally, you would do this in `.config`, but you can also do this temporarily in the environment (`export ARCH=i686`) or you can use `build.sh`.
FIRST TIME BUILD
Bootstrap Pre-Requisites And Fetch Sources
Follow the instructions for running bootstrap.sh to set up your environment - Building Redox or Podman Build.
Install Emulator Package
The i386 emulator is not installed by `bootstrap.sh`. You can add it like this:

(Pop!_OS/Ubuntu/Debian)

```sh
sudo apt-get install qemu-system-i386
```
Config Values
Before your first build, be sure to set the `ARCH` variable in `.config` to your architecture type, in this case `i686`. You can change several other configurable settings, such as the filesystem contents, etc. See Configuration Settings.
Add packages to the filesystem.
You can add programs to the filesystem by following the instructions here.
ADVANCED USERS
For more details on the build process, please read Advanced Build.
Compiling The Entire Redox Project
Now we have:
- fetched the sources
- set the `ARCH` to `i686`
- selected a filesystem config, e.g. `desktop`
- tweaked the settings to our liking
- possibly added our very own source/binary package to the filesystem
We are ready to build the entire Redox Operating System Image.
Building an image for emulation
```sh
cd ~/tryredox/redox
time make all
```

will make the target, e.g. `build/i686/desktop/harddrive.img`, which you can run with an emulator. See Running Redox.
Building Redox Live CD/USB Image for i686
```sh
cd ~/tryredox/redox
time make live
```

will make the target `build/i686/desktop/livedisk.iso`, which can be copied to a USB drive or CD for booting or installation. See Running Redox on real hardware.
Give it a while. Redox is big.
The two main targets, e.g. `build/i686/desktop/harddrive.img` and `build/i686/desktop/livedisk.iso`, do the following:
- fetch some sources for the core tools from the redox-os GitLab servers, then build them; as each package is progressively cooked, its source is fetched and built
- create a few empty files holding different parts of the final image filesystem
- using the newly built core tools, build the non-core packages into one of those filesystem parts
- fill the remaining filesystem parts appropriately with files built by the core tools to help boot Redox
- merge the different filesystem parts into a final Redox operating system image, ready to run in QEMU or be written to a USB drive or CD
Cleaning Previous Build Cycles
Cleaning Intended For Rebuilding Core Packages And Entire System
When you need to rebuild core packages like relibc, gcc and related tools, clean the entire previous build cycle with:

```sh
cd ~/tryredox/redox/
rm -rf prefix/i686-unknown-redox/relibc-install/ cookbook/recipes/gcc/{build,sysroot,stage*} build/i686/*/{harddrive.img,livedisk.iso}
```
Cleaning Intended For Only Rebuilding Non-Core Package(s)
If you're only rebuilding a non-core package, you can partially clean the previous build cycle just enough to force the rebuilding of the non-core package:

```sh
cd ~/tryredox/redox/
rm build/i686/*/{fetch.tag,harddrive.img}
```
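The clean commands above rely on shell brace expansion to name several paths at once. You can preview what such a pattern expands to by substituting `echo` for `rm` (a bash-specific illustration; the fixed paths mirror the example above):

```shell
# In bash, the braces expand before the command runs, so this prints
# both paths without deleting anything:
echo build/i686/desktop/{fetch.tag,harddrive.img}
```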
Running Redox
Running The Redox Desktop
To run Redox, do:

```sh
make qemu
```

This should open up a QEMU window, booting to Redox.

If it does not work, disable KVM with:

```sh
make qemu kvm=no
```

or:

```sh
make qemu iommu=no
```

If this doesn't work either, you should open an issue.
Running The Redox Console Only
We disable the GUI desktop by passing `vga=no`. The following command disables graphics support and welcomes you with the Redox console:

```sh
make qemu vga=no
```

It is advantageous to run the console in order to capture the output from non-GUI applications. This helps when debugging applications and sharing captured console logs with other developers in the Redox community.
Running The Redox Console With A Qemu Tap For Network Testing
Expose Redox to other computers within a LAN. Configure Qemu with a "TAP" which will allow other computers to test Redox client/server/networking capabilities.
Join the Redox chat if this is something you are interested in pursuing.
Note
If you encounter any bugs, errors, obstructions, or other annoying things, please report the issue. Thanks!
Working with AArch64/Arm64
The Redox build system now supports building for multiple CPU architectures in the same directory tree. Building for `i686` or `aarch64` only requires that you set the `ARCH` Make variable to the correct value. Normally, you would do this in `.config`, but you can also do this temporarily in the environment (`export ARCH=aarch64`) or you can use `build.sh`.
AArch64 has limited support in this release (0.8.0), proceed at your own risk.
FIRST TIME BUILD
Bootstrap Pre-Requisites And Fetch Sources
Follow the instructions for running bootstrap.sh to set up your environment - Building Redox or Podman Build.
Install Emulator Package
The aarch64 emulator is not installed by `bootstrap.sh`. You can add it like this:

(Pop!_OS/Ubuntu/Debian)

```sh
sudo apt-get install qemu-system-aarch64
```
Install Additional Tools To Build And Run ARM 64-bit Redox OS Image
```sh
sudo apt-get install u-boot-tools qemu-system-arm qemu-efi
```
Config Values
Before your first build, be sure to set the `ARCH` variable in `.config` to your architecture type, in this case `aarch64`. You can change several other configurable settings, such as the filesystem contents, etc. See Configuration Settings.
Add packages to the filesystem.
You can add programs to the filesystem by following the instructions here.
ADVANCED USERS
For more details on the build process, please read Advanced Build.
Compiling The Entire Redox Project
Now we have:
- fetched the sources
- set the `ARCH` to `aarch64`
- selected a filesystem config, e.g. `desktop`
- tweaked the settings to our liking
- possibly added our very own source/binary package to the filesystem
We are ready to build the entire Redox Operating System Image.
Building an image for emulation
```sh
cd ~/tryredox/redox
time make all
```

will make the target, e.g. `build/aarch64/desktop/harddrive.img`, which you can run with an emulator. See Running Redox.
Give it a while. Redox is big.
The main target, e.g. `build/aarch64/desktop/harddrive.img`, will do the following:
- fetch some sources for the core tools from the redox-os GitLab servers, then build them; as each package is progressively cooked, its source is fetched and built
- create a few empty files holding different parts of the final image filesystem
- using the newly built core tools, build the non-core packages into one of those filesystem parts
- fill the remaining filesystem parts appropriately with files built by the core tools to help boot Redox
- merge the different filesystem parts into a final Redox operating system image, ready to run in QEMU
Cleaning Previous Build Cycles
Cleaning Intended For Rebuilding Core Packages And Entire System
When you need to rebuild core packages like relibc, gcc and related tools, clean the entire previous build cycle with:

```sh
cd ~/tryredox/redox/
rm -rf prefix/aarch64-unknown-redox/relibc-install/ cookbook/recipes/gcc/{build,sysroot,stage*} build/aarch64/*/{harddrive.img,livedisk.iso}
```
Cleaning Intended For Only Rebuilding Non-Core Package(s)
If you're only rebuilding a non-core package, you can partially clean the previous build cycle just enough to force the rebuilding of the non-core package:

```sh
cd ~/tryredox/redox/
rm build/aarch64/*/{fetch.tag,harddrive.img}
```
Running Redox
To run Redox, do:

```sh
make qemu kvm=no vga=no
```

This should boot to Redox. The desktop GUI will be disabled, but you will be prompted to log in to the Redox console.
Running The Redox Console With A Qemu Tap For Network Testing
Expose Redox to other computers within a LAN. Configure Qemu with a "TAP" which will allow other computers to test Redox client/server/networking capabilities.
Join the Redox chat if this is something you are interested in pursuing.
Note
If you encounter any bugs, errors, obstructions, or other annoying things, please report the issue. Thanks!
Troubleshooting the Build
In case you need to do some troubleshooting of the build process, this is a brief overview of the Redox toolchain, with some troubleshooting tips. This chapter is a work in progress.
Setting Up
bootstrap.sh
When you run `bootstrap.sh` or `podman_bootstrap.sh`, the Linux tools and libraries required to support the toolchain and build process are installed. Then the `redox` project is cloned from the Redox GitLab. The `redox` project does not contain the Redox sources; it mainly contains the build system. The `cookbook` subproject, which contains recipes for all the packages to be included in Redox, is also copied as part of the clone.
Not all Linux distributions are supported by `bootstrap.sh`, so if you are on an unsupported distribution, try `podman_bootstrap.sh` for Podman builds, or have a look at podman_bootstrap.sh and try to complete the setup manually.

If you want to support your distribution/OS without Podman, you can try to install the Debian/Ubuntu package equivalents for your distribution/OS from your package manager/software store; you can see them in this section of `bootstrap.sh`.
The `bootstrap.sh` script and `redox-base-containerfile` cover the build system packages needed by the recipes in `demo.toml`. (Note that some distributions/OSes may have environment problems that are hard to fix; on these systems, Podman will avoid some headaches.)
git clone
If you did not use `bootstrap.sh` or `podman_bootstrap.sh` to set up your environment, you can get the sources with:

```sh
git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
```

If you are missing the `cookbook` project or other components, ensure that you used the `--recursive` flag when doing `git clone`. Ensure that all the libraries and packages required by Redox are installed by running `bootstrap.sh -d` or, if you will be using the Podman build, `podman_bootstrap.sh -d`.
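If you already cloned without `--recursive` and find the `cookbook` directory empty, the submodules can usually be fetched after the fact with a standard git command (this is generic git behavior, not a Redox-specific tool):

```shell
# Run inside the redox directory; fetches any submodules listed in
# .gitmodules, including nested ones. Does nothing if there are none.
git submodule update --init --recursive
```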
Building the System
When you run `make all`, the following steps occur.
.config and mk/config.mk
`make` scans `.config` and `mk/config.mk` for settings, such as the CPU architecture, config name, and whether to use Podman during the build process. Read through Configuration Settings to make sure you have the settings that are best for you.
Prefix
The Redox toolchain, referred to as the Prefix because it is prefixed with the architecture name, is downloaded and/or built. Custom versions of `cargo`, `rustc`, `gcc` and many other tools are created. They are placed in the `prefix` directory.

If you have a problem with the toolchain, try `rm -rf prefix`, and everything will be reinstalled the next time you run `make all`.
Podman
If enabled, the Podman environment is set up. Podman is recommended for distros other than Pop!_OS/Ubuntu/Debian.
If your build appears to be missing libraries, have a look at Debugging your Podman Build Process.
If your Podman environment becomes broken, you can use `podman system reset` and `rm -rf build/podman`. In some cases, you may need to do `sudo rm -rf build/podman`.

If you have other problems with Podman, read the Troubleshooting Podman chapter.
Filesystem Config
The list of Redox packages to be built is read from the filesystem config file, which is specified in `.config` or `mk/config.mk`. If your package is not being included in the build, check that you have set `CONFIG_NAME` or `FILESYSTEM_CONFIG`, then check the config file.
Fetch
Each recipe's source is downloaded using `git` or `tar`, according to the `[source]` section of `cookbook/recipes/recipe-name/recipe.toml`. The source is placed in `cookbook/recipes/recipe-name/source`. Some recipes use the older `recipe.sh` format instead.
If you are doing work on a recipe, you may want to comment out the `[source]` section of the recipe. To discard your changes to the source for a recipe, or to update to the latest version, uncomment the `[source]` section of the recipe, and use `rm -rf source target` in the recipe directory to remove both the source and any compiled code.
After all recipes are fetched, a tag file is created as `build/$ARCH/$CONFIG_NAME/fetch.tag`, e.g. `build/x86_64/desktop/fetch.tag`. If this file is present, fetching is skipped. You can remove it manually, or use `make rebuild`, if you want to force refetching.
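The tag-file mechanism itself is simple: do the work once, touch a marker file, and skip the work while the marker exists. A minimal sketch of the idea (hypothetical paths and function names, not the build system's actual code):

```shell
# Minimal sketch of the tag-file pattern: skip work when the tag exists.
tagdir=$(mktemp -d)
tag="$tagdir/fetch.tag"
fetch() {
  if [ -f "$tag" ]; then
    echo "fetch skipped"
  else
    echo "fetching sources"
    touch "$tag"   # record that fetching completed
  fi
}
fetch   # first run: fetches and creates the tag
fetch   # second run: skipped because the tag exists
```

Removing the tag file, as described above, is what forces the work to happen again.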
Cookbook
Each recipe is built according to the `recipe.toml` file. The compiled recipe is placed in the `target` directory, in a subdirectory named based on the CPU architecture. These tasks are done by various Redox-specific shell scripts and commands, including `repo.sh`, `cook.sh` and `Cargo`. These commands make assumptions about `$PATH` and `$PWD`, so they might not work if you are using them outside the build process.
If you have a problem with a package you are building, try `rm -rf target` in the recipe directory. A common problem when building on non-Debian systems is that certain packages will fail to build due to missing libraries. Try using Podman Build.
After all packages are cooked, a tag file is created as `build/$ARCH/$CONFIG_NAME/repo.tag`. If this file is present, cooking is skipped. You can remove it manually, or use `make rebuild`, which will force refetching and rebuilding.
Create the Image with FUSE
To build the final Redox image, `redox_installer` uses FUSE, creating a virtual filesystem and copying the packages into it. This is done outside of Podman, even if you are using Podman Build.

On some Linux systems, FUSE may not be permitted for some users, or `bootstrap.sh` might not install it correctly. Investigate whether you can address your FUSE issues, or join the chat if you need advice.
Solving Compilation Problems
- Check your Rust version (run `make env` and `cargo --version`, then `exit`); make sure you have the latest version of Rust nightly!
  - rustup.rs is recommended for managing Rust versions. If you already have it, run `rustup`.
- Check that your `make` and `nasm` are up-to-date.
- Run `make clean pull` to remove all your compiled binaries and update the sources.
- Sometimes there are merge requests that briefly break the build, so check on the chat whether anyone else is experiencing your problems.
- Sometimes both the source and the binary of a recipe are wrong; remove the `source` and `target` folders of the recipe and trigger a new build to see if it works.
  - Example: `make u.recipe-name c.recipe-name r.recipe-name`
Update your branch
If you are making local changes to the build system, you have probably left your own branch active in the folder (instead of the `master` branch).

New branches don't sync automatically with `master`; if the `master` branch receives new commits, you won't get them because your branch is outdated.
To fix this, run:

```sh
git checkout master
git pull
git checkout your-branch
git merge master
```

Or:

```sh
git checkout master
git pull
git merge your-branch master
```
If you want an anonymous merge, read this.
Update relibc
An outdated relibc copy can contain bugs (already fixed in recent versions) or outdated crates.
Update crates
Sometimes a Rust program uses an old crate version that lacks Redox support.
Verify the dependency tree
Some crates take a long time to publish a new release (years, in some cases), so their releases may pin old versions of other crates, versions in which Redox support is not available (causing errors during compilation of the program).

The `redox_syscall` crate is the most affected by this; some crates hold a very old version of it and will require patches (`cargo update -p` alone doesn't work).
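One way to force a newer `redox_syscall` across the whole dependency graph is a Cargo `[patch]` override in the program's top-level `Cargo.toml`. Treat the entry below as a hedged sketch: the git URL is assumed to be the crate's upstream repository, and the right source to point at depends on which fix you need.

```toml
# Hypothetical override: replace every crates.io version of redox_syscall
# in the dependency tree with a git checkout that has the needed fixes.
[patch.crates-io]
redox_syscall = { git = "https://gitlab.redox-os.org/redox-os/syscall.git" }
```

After adding the entry, regenerate the lock file (e.g. with `cargo update`) so the patched source takes effect.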
To identify which crates are using old versions of Redox crates, you will need to verify the dependency tree of the program. Inside the program source directory, run:

```sh
cargo tree --target=x86_64-unknown-redox
```

This command will draw the dependency tree, and you will need to find the crate name in the hierarchy.

If you don't want to search manually, you can use a `grep` pipe to see all crate versions used in the tree. Sadly, `grep` doesn't preserve the tree hierarchy, so it's only useful for seeing versions and checking whether a patched crate works (if the patched crate works, all matches for the crate will report the most recent version).

To do this, run:

```sh
cargo tree --target=x86_64-unknown-redox | grep crate-name
```
Debug Methods
- Use the following command for advanced logging:

  ```sh
  make some-command 2>&1 | tee file-name.log
  ```
- You can write to the `debug:` scheme, which will output on the console, but you must be `root`. This is useful if you are debugging an app where you need to use Orbital but still want to capture messages.
- Currently, the build system strips function names and other symbols from programs, as support for symbols is not implemented on Redox.
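The `make … | tee` pipeline above captures the build output to a file while still displaying it on screen. A small self-contained illustration, with `echo` standing in for a real `make` target:

```shell
# tee copies its input to stdout and to the named file at the same time:
echo "example build output" | tee /tmp/build-demo.log
# The same text is now saved, ready to share with other developers:
cat /tmp/build-demo.log
```

The `2>&1` in the original command additionally redirects stderr into the pipe, so compiler errors land in the log as well.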
Kernel Panics in QEMU
If you receive a kernel panic in QEMU, capture a screenshot and send it to us on Matrix, or create an issue on GitLab.
Kill the Frozen QEMU Process
Run:

```sh
pkill qemu-system
```
Build System Quick Reference
The build system downloads/creates several files that you may want to know about. There are also several `make` targets mentioned above, and a few extras that you may find useful. Here's a quick summary. All file paths are relative to your `redox` base directory.
- Build System Organization
- Make Commands
- Environment Variables
- Scripts
- Component Separation
- Crates
- Pinned commits
- Git auto-checkout
- Update the build system
- Update relibc
- Configuration
- Cross-Compilation
- Build Phases
Build System Organization
Root Folder
- `Makefile` - The main makefile for the system; it loads all the other makefiles.
- `.config` - Where you change your build system settings. It is loaded by the Makefile. It is ignored by `git`.
Make Configuration
- `mk/config.mk` - The build system's own settings are here. You can override these settings in your `.config`; don't change them here.
- `mk/*.mk` - The rest of the makefiles. You should not need to change them.
Podman Configuration
- `podman/redox-base-containerfile` - The file used to create the image used by Podman Build. The installation of Ubuntu packages needed for the build is done here. See Adding Ubuntu Packages to the Build if you need to add additional Ubuntu packages.
Build System Configuration
- `config/your-cpu-arch/your-config.toml` - The build configuration with system settings, paths and recipes to be included in the QEMU image that will be built, e.g. `config/x86_64/desktop.toml`.
- `config/your-cpu-arch/server.toml` - The `server` variant with system components only (try this config if you have boot problems on QEMU/real hardware).
- `config/your-cpu-arch/desktop.toml` - The default build config with system components and the Orbital desktop environment.
- `config/your-cpu-arch/demo.toml` - The `demo` variant with optional programs and games.
- `config/your-cpu-arch/ci.toml` - The continuous integration configuration; recipes added here become packages on the CI server.
- `config/your-cpu-arch/dev.toml` - The development variant with GCC and Rust included.
- `config/your-cpu-arch/desktop-minimal.toml` - The minimal `desktop` variant for low-end computers.
- `config/your-cpu-arch/server-minimal.toml` - The minimal `server` variant for low-end computers.
- `config/your-cpu-arch/resist.toml` - The build with the `resist` POSIX test suite.
- `config/your-cpu-arch/acid.toml` - The build with the `acid` stress test suite.
- `config/your-cpu-arch/jeremy.toml` - The build of Jeremy Soller (creator/BDFL of Redox) with the recipes he is testing at the moment.
Cookbook
- `cookbook/recipes/recipe-name` - A recipe (software port) directory (represented as `recipe-name`); this directory holds the `recipe.toml` file.
- `cookbook/recipes/recipe-name/recipe.toml` - The recipe configuration file. A recipe contains instructions for obtaining sources via tarball or git, then creating executables or other files to include in the Redox filesystem. Note that a recipe can contain dependencies that cause other recipes to be built, even if the dependencies are not otherwise part of your Redox build. To learn more about the recipe system, read this page.
- `cookbook/recipes/recipe-name/recipe.sh` - The old recipe configuration format (can't be used as a dependency of a recipe with a TOML configuration).
- `cookbook/recipes/recipe-name/source.tar` - The tarball of the recipe (renamed).
- `cookbook/recipes/recipe-name/source` - The directory where the recipe source is extracted/downloaded.
- `cookbook/recipes/recipe-name/target` - The directory where the recipe binaries are stored.
- `cookbook/recipes/recipe-name/target/${TARGET}` - The directory for the recipe binaries of the CPU architecture (`${TARGET}` is the environment variable for the CPU).
- `cookbook/recipes/recipe-name/target/${TARGET}/build` - The directory where the recipe build system runs its commands.
- `cookbook/recipes/recipe-name/target/${TARGET}/stage` - The directory where recipe binaries go before packaging. After `make all` or `make rebuild`, the installer will extract the recipe package onto the QEMU image, generally at `/bin` or `/lib` in the Redox filesystem hierarchy.
- `cookbook/recipes/recipe-name/target/${TARGET}/sysroot` - The folder where recipe build dependencies (libraries) go, for example: `library-name/src/example.c`
- `cookbook/recipes/recipe-name/target/${TARGET}/stage.pkgar` - Redox package file.
- `cookbook/recipes/recipe-name/target/${TARGET}/stage.sig` - Signature for the `tar` package format.
- `cookbook/recipes/recipe-name/target/${TARGET}/stage.tar.gz` - Legacy `tar` package format, produced for compatibility reasons as we are working to make the package manager use the `pkgar` format.
- `cookbook/recipes/recipe-name/target/${TARGET}/stage.toml` - Contains the runtime dependencies of the package and is part of both package formats.
- `cookbook/*` - Part of the Cookbook system; these scripts and utilities help build the recipes.
- `prefix/*` - Tools used by the Cookbook system. They are normally downloaded during the first system build. If you are having a problem with the build system, you can remove the `prefix` directory and it will be recreated during the next build.
Build System Files
- `build` - The directory where the build system will place the final image. Usually `build/$(ARCH)/$(CONFIG_NAME)`, e.g. `build/x86_64/desktop`.
- `build/your-cpu-arch/your-config/harddrive.img` - The Redox image file, to be used by QEMU or VirtualBox for virtual machine execution on a Linux host.
- `build/your-cpu-arch/your-config/livedisk.iso` - The Redox bootable image file, to be copied to a USB drive or CD for live boot and possible installation.
- `build/your-cpu-arch/your-config/fetch.tag` - An empty file that, if present, tells the build system that fetching of recipe sources has been done.
- `build/your-cpu-arch/your-config/repo.tag` - An empty file that, if present, tells the build system that all recipes required for the Redox image have been successfully built. The build system will not check for changes to your code when this file is present. Use `make rebuild` to force the build system to check for changes.
- `build/podman` - The directory where Podman Build places the container user's home directory, including the container's Rust installation. Use `make container_clean` to remove it. In some situations, you may need to remove this directory manually, possibly with root privileges.
- `build/container.tag` - An empty file, created during the first Podman Build, so Podman Build knows a reusable Podman image is available. Use `make container_clean` to force a rebuild of the Podman image on your next `make rebuild`.
Make Commands
You can combine `make` targets, but order is significant. For example, `make r.games image` will build the `games` recipe and create a new Redox image, but `make image r.games` will make the Redox image before it builds the recipe.
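Make processes its goals from left to right, which is why the order matters. A toy two-target Makefile (not the Redox one) makes this visible:

```shell
# Write a throwaway Makefile (recipe lines need a leading tab, hence printf):
printf 'a:\n\t@echo building a\nb:\n\t@echo building b\n' > /tmp/order-demo.mk
# Goals run in the order given on the command line:
make -f /tmp/order-demo.mk a b   # builds a, then b
make -f /tmp/order-demo.mk b a   # builds b, then a
```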
Build System
- `make pull` - Update the sources for the build system without building.
- `make all` - Builds the entire system, checking for changes and only building as required. Only use this for the first build. If the system was successfully built previously, this command may report `Nothing to be done for 'all'`, even if some recipes have changed. Use `make rebuild` instead. (You need to use this command if the Redox toolchain changed, after the `make clean` command.)
- `make rebuild` - Rebuild all recipes with changes (it doesn't detect changes to the Redox toolchain), including downloading changes from GitLab; it should be your normal `make` target.
- `make prefix` - Download the Rust/GCC forks and build relibc (after `touch relibc`).
- `make fstools` - Build the image builder and RedoxFS (after `touch installer` or `touch redoxfs`).
- `make fetch` - Update recipe sources, according to each recipe, without building them. Only the recipes that are included in your `(CONFIG_NAME).toml` are fetched. Does nothing if `$(BUILD)/fetch.tag` is present. You won't need this.
- `make clean` - Clean all recipe binaries. (Note that `make clean` may require some tools to be built.)
- `make unfetch` - Clean all recipe sources.
- `make distclean` - Clean all recipe sources and binaries (a complete `make clean`).
- `make repo` - Package the recipe binaries, according to each recipe. Does nothing if `$(BUILD)/repo.tag` is present. You won't need this.
- `make live` - Creates a bootable image, `build/livedisk.iso`. Recipes are not usually rebuilt.
- `make env` - Creates a shell with the build environment initialized. If you are using Podman Build, the shell will be inside the container, and you can use it to debug build issues such as missing packages.
- `make container_su` - After creating a container shell using `make env`, and while that shell is still running, use `make container_su` to enter the same container as `root`. See Debugging your Build Process.
- `make container_clean` - If you are using Podman Build, this will discard images and other files created by it.
- `make container_touch` - If you have removed the file `build/container.tag`, but the container image is still usable, this will recreate the `container.tag` file and avoid rebuilding the container image.
- `make container_kill` - If you have started a build using Podman Build and you want to stop it, `Ctrl-C` may not be sufficient. Use this command to terminate the most recently created container.
Recipes
- `make f.recipe-name` - Download the recipe source.
- `make r.recipe-name` - Build a single recipe, checking if the recipe source has changed, and creating the executable, etc., e.g. `make r.games`. The package is built even if it is not in your filesystem configuration. (This command will continue where you stopped the build process; it's useful for saving time if you had a compilation error and patched a crate.)
- `make c.recipe-name` - Clean the binary and intermediate build artifacts of the recipe.
- `make u.recipe-name` - Clean the recipe source.
QEMU/VirtualBox
- `make qemu` - If a `build/harddrive.img` file exists, QEMU is run using that image. If you want to force a rebuild first, use `make rebuild qemu`. Sometimes `make qemu` will detect a change and rebuild, but this is not typical. If you are interested in a particular combination of QEMU command line options, have a look through `mk/qemu.mk`.
- `make qemu vga=no` - Start QEMU without a GUI (this also disables Orbital).
- `make qemu vga=virtio` - Start QEMU with the VirtIO GPU driver (2D acceleration).
- `make qemu kvm=no` - Start QEMU without Linux KVM acceleration.
- `make qemu iommu=no` - Start QEMU without the IOMMU.
- `make qemu audio=no` - Disable all audio drivers.
- `make qemu usb=no` - Disable all USB drivers.
- `make qemu efi=yes` - Enable the UEFI boot loader (it supports more screen resolutions).
- `make qemu live=yes` - Start a live disk (loads the entire image into RAM).
- `make qemu vga=no kvm=no` - QEMU options are cumulative and can be combined.
- `make image` - Builds a new QEMU image, `build/harddrive.img`, without checking if any recipes have changed. Not recommended, but it can save you some time if you are just updating one recipe with `make r.recipe-name`.
- `make gdb` - Connects `gdb` to the Redox image in QEMU. Join us on chat if you want to use this.
- `make mount` - Mounts the Redox image as a filesystem at `$(BUILD)/filesystem`. Do not use this if QEMU is running, and remember to use `make unmount` as soon as you are done. This is not recommended, but if you need to get a large file onto or off of your Redox image, this is available as a workaround.
- `make unmount` - Unmounts the Redox image filesystem. Use this as soon as you are done with `make mount`, and do not start QEMU until this is done.
- `make virtualbox` - The same as `make qemu`, but for VirtualBox.
Environment Variables
These variables are used by programs or commands.
- `$(BUILD)` - Represents the `build` folder.
- `$(ARCH)` - Represents the CPU architecture folder inside `build`.
- `${TARGET}` - Represents the CPU architecture folder at `cookbook/recipes/recipe-name/target`.
- `$(CONFIG_NAME)` - Represents your Cookbook configuration folder at `build/your-cpu-arch`.
We recommend that you wrap these variables in `"` quotes to protect any spaces in the path; unquoted spaces are interpreted as argument separators and will break the path.
Example:
"${VARIABLE_NAME}"
If you have a folder inside the variable folder you can call it with:
"${VARIABLE_NAME}"/folder-name
Or
"${VARIABLE_NAME}/folder-name"
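A minimal sketch of why the quoting matters (the variable value and directory names below are hypothetical):

```shell
# Hypothetical variable whose value contains a space
VARIABLE_NAME="/tmp/redox build"
mkdir -p "${VARIABLE_NAME}/folder-name"

# Quoted: the whole path reaches ls as a single argument
ls -d "${VARIABLE_NAME}/folder-name"

# Unquoted: the shell splits the value at the space, so ls receives two
# arguments ("/tmp/redox" and "build/folder-name"), and both fail
ls -d ${VARIABLE_NAME}/folder-name 2>/dev/null || echo "unquoted path failed"
```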
Scripts
You can use these scripts to perform actions not implemented as commands in the Cookbook build system.
- `scripts/changelog.sh` - Show the changelog of all Redox components/recipes.
- `scripts/find-recipe.sh` - Show all files installed by a recipe:
  `scripts/find-recipe.sh recipe-name`
- `scripts/rebuild-recipe.sh` - Alternative to `make u.recipe r.recipe c.recipe u.recipe` that cleans your recipe source/binary (deletes `source`, `source.tar` and `target` in the recipe folder) to make a new build:
  `scripts/rebuild-recipe.sh recipe-name`
Component Separation
- `relibc` - The cross-compiled recipes will link to the relibc in this folder (submodule).
- `redoxfs` - The FUSE driver of RedoxFS (submodule, to run on Linux).
- `cookbook/recipes/relibc` - The relibc package to be installed inside of Redox for static or dynamic linking (recipe, for native compilation).
- `cookbook/recipes/redoxfs` - The RedoxFS user-space daemon that runs inside of Redox (recipe).
Crates
Some Redox projects have crates on crates.io and thus follow version-based development: when a change lands in the repository, a new version must be released on crates.io before it reaches users, which adds some delay.
Current projects with crates
- libredox
- redox_syscall
- redoxfs
- redoxer
- redox_installer
- redox-users
- redox-buffer-pool
- redox_log
- redox_termios
- redox-daemon
- redox_event
- redox_event_update
- redox_pkgutils
- redox_uefi
- redox_uefi_alloc
- redox_dmi
- redox_hwio
- redox_intelflash
- redox_liner
- redox_simple_endian
- redox_uefi_std
- ralloc
- orbclient
- orbclient_window_shortcuts
- orbfont
- orbimage
- orbterm
- orbutils
- slint_orbclient
- ralloc_shim
- ransid
- gitrepoman
- pkgar
- pkgar-core
- pkgar-repo
- termion
- reagent
- gdb-protocol
- orbtk
- orbtk_orbclient
- orbtk-render
- orbtk-shell
- orbtk-tinyskia
Manual patching
If you don't want to wait for a new release on crates.io, you can patch the crate temporarily by fetching the version you need from GitLab and changing the crate's entry in `Cargo.toml` to `crate-name = { path = "path/to/crate" }`.
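For example, a patched dependency entry might look like this (the crate name and path here are hypothetical):

```toml
[dependencies]
# Temporarily use a local checkout instead of the crates.io release
some-crate = { path = "../some-crate" }
```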
Pinned commits
The build system pins the last working commit of each submodule. If a submodule is broken by some commit, the pinned commit prevents that broken commit from being fetched, so pinned commits improve development stability (broken changes don't reach developers/testers).
(When you run `make pull`, the build system updates the submodule folders to the pinned commits)
Current pinned submodules
cookbook
installer
redoxfs
relibc
rust
Manual submodule update
Whenever a fix or new feature is merged into a submodule, the upstream build system must update the pinned commit hash. To work around the delay, you can run `git pull` directly in the submodule folder, for example:
make pull
cd submodule-folder-name
git checkout master
git pull
cd ..
Git auto-checkout
The `make rebuild` and `make r.recipe` commands will do a Git checkout (change the active branch) of the recipe source to `master` (only recipes that fetch Git repositories are affected, which includes all Redox components).
If you are working on a separate branch of the recipe source, your changes won't be built. To avoid this, comment out the `[source]` and `git =` fields in your `recipe.toml`:
#[source]
#git = "some-repository-link"
Submodules
The `make pull` and `git submodule update` commands will check out the submodules' active branches to `master` and pin the commit at `HEAD`. If you are working on the build system submodules, don't run these commands, so the build system keeps using your changes.
Update the build system
This is the recommended way to update your sources/binaries.
make pull rebuild
(If the `make pull` command downloads new commits of the `relibc` submodule, you will need to run the commands in the section below)
Some changes require a complete rebuild (read the Dev room in our chat to learn whether a heavy MR was merged, then run the `make clean all` command) or a fresh build system copy (run the bootstrap.sh script again, or run the commands in this section), but the commands above cover most cases.
Update relibc
An outdated relibc copy can contain bugs (already fixed in recent commits) or outdated crates. To update the relibc sources and build them, run:
make pull
touch relibc
make prefix
All recipes
To pass the new relibc changes to all recipes (system components are the most common use case) you will need to rebuild all recipes. Unfortunately, `make rebuild` cannot be used here, because it can't detect relibc changes and trigger a complete rebuild.
To clean all recipe binaries and trigger a complete rebuild, run:
make clean all
One recipe
To pass the new relibc changes to one recipe, run:
make c.recipe-name r.recipe-name
Update relibc crates
Sometimes you need to update the relibc crates. Run these commands between the `make pull` and `touch relibc` steps:
cd relibc
cargo update -p crate-name
cd ..
Or, to update all crates (this may break the ABI):
cd relibc
cargo update
cd ..
Configuration
Cross-Compilation
The Redox build system is an example of cross-compilation. The Redox toolchain runs on Linux, and produces Redox executables. Anything that is installed with your package manager is just part of the toolchain and does not go on Redox.
Because the recipes are statically linked, Redox doesn't have packages with shared libraries (`lib*`) like those seen in most Unix/Linux distributions.
In the background, make all
downloads the Redox toolchain to build all recipes (patched forks of rustc, GCC and LLVM).
If you are using Podman, the `podman_bootstrap.sh` script will download an Ubuntu container, `make all` will install the Redox toolchain, and all recipes will be compiled inside the container.
The recipes produce Redox-specific executables. At the end of the build process, these executables are installed inside the QEMU image.
`relibc` (the Redox C Library) provides the Redox system calls to any software.
Build Phases
Every build system command/script runs in phases; this page documents them.
- bootstrap.sh
- podman_bootstrap.sh
- make all (first run)
- make all (second run)
- make all (Podman, first run)
- make prefix
- make rebuild
- make r.recipe
- make qemu
bootstrap.sh
This is the script used to do the initial configuration of the build system; its phases are:
- Install the Rust toolchain (using rustup.rs) and add `cargo` to the shell PATH.
- Install all necessary packages (based on your Unix-like distribution) of the development tools to build all recipes.
- Download the build system sources/submodules (this step is skipped if you pass the `-d` option: `./bootstrap.sh -d`).
- Show a message with the commands to build the Redox system.
podman_bootstrap.sh
This script is the alternative to `bootstrap.sh` for non-Debian systems, used to configure the build system for use with Podman; its phases are:
- Install Podman, make, FUSE and QEMU if they are not installed.
- Download the build system sources (this step is skipped if you pass the `-d` option: `./podman_bootstrap.sh -d`).
- Show a message with the commands to build the Redox system.
make all
(first run)
This is the command used to build all recipes inside your default TOML configuration (`config/$ARCH/desktop.toml` or the one set in your `.config`); its phases are:
- Download the binaries of the Redox toolchain (patched rustc, GCC and LLVM) from the CI server.
- Download the sources of the recipes specified on your TOML configuration.
- Build the recipes.
- Package the recipes.
- Create the QEMU image and install the packages.
make all
(second run)
If the `build/$ARCH/$CONFIG/repo.tag` file is up to date, it won't do anything. If the `repo.tag` file is missing, it works like `make rebuild`.
make all
(Podman, first run)
This command works differently under Podman; its phases are:
- Install the Redox container (Ubuntu + Redox toolchain).
- Install the Rust toolchain inside this container.
- Install the Ubuntu packages (inside the container) of the development tools to build all recipes.
- Download the sources of the recipes specified on your TOML configuration.
- Build the recipes.
- Package the recipes.
- Create the QEMU image and install the packages.
make prefix
This command is used to download the build system toolchain; its phases are:
- Download the Rust and GCC forks from the CI server (if they are not present, or if you executed `rm -rf prefix` to fix issues).
- Build the `relibc` submodule.
make rebuild
This is the command used to check/build the recipes with changes, see its phases below:
- Check for source changes on recipes (if confirmed, download them) or if a new recipe was added to the TOML configuration.
- Build the recipes with changes.
- Package the recipes with changes.
- Create the QEMU image and install the packages.
make r.recipe
This is the command used to build a recipe; its phases are:
- Search where the recipe is stored.
- Check whether the `source` folder is present; if not, download the source using the method specified in the `recipe.toml` (this step is skipped if the `[source]` and `git =` fields are commented out of the `recipe.toml`).
- Build the recipe dependencies as static objects (for static linking).
- Start the compilation based on the template in the `recipe.toml`.
- If the recipe uses Cargo, it will download the crates, build them and link them into the final binary of the program.
- If the recipe uses GNU Autotools, CMake or Meson, these will check the build environment and library sources to link into the final binary of the program.
- Package the recipe.
Typically, make r.recipe
is used with make image
to quickly build a recipe and create an image to test it.
make qemu
This is the command used to run Redox inside a virtual machine; its phases are:
- It checks for pending changes; if any are found, it triggers `make rebuild`.
- It checks for the existence of the QEMU image; if it's not available, it works like `make image`.
- A command with custom arguments is passed to QEMU to boot Redox without problems.
- The QEMU window is shown with a menu to choose the resolution.
- The boot process happens (the bootloader bootstraps the kernel, and init starts the userspace daemons).
- The Orbital login screen appears.
Developing for Redox
Redox does not yet have a complete set of development tools that run natively. Currently, you must do your development on Linux, then include or copy your application to your Redox filesystem. This chapter outlines some of the things you can do as a developer.
(Before reading this chapter you must read the Understanding Cross-Compilation for Redox and Build System Quick Reference pages)
Including Programs in Redox
(Before reading this page you must read the Build System Quick Reference page)
- Existing package
- Using a Script
- Modifying an Existing Package
- Create your own - Hello World
- Running your program
The Cookbook system makes the packaging process very simple. First, we will show how to add an existing program for inclusion. Then we will show how to create a new program to be included. In Coding and Building, we discuss the development cycle in more detail.
Existing Package
Redox has many frequently used packages and programs that are available for inclusion. Each package has a recipe in the directory cookbook/recipes/packagename
. Adding an existing package to your build is as simple as adding it to config/$ARCH/myfiles.toml
, or whatever name you choose for your .toml
configuration definition. Here we will add the games
package, which contains several low-def games.
Set up the Redox Build Environment
- Follow the steps in Building Redox or Podman Build to create the Redox Build Environment on your host computer.
- Check that `CONFIG_NAME` in `mk/config.mk` is `desktop`.
- Build the system as described. This will take quite a while the first time.
- Run the system in QEMU.
cd ~/tryredox/redox
make qemu
Assuming you built the default configuration desktop
for x86_64
, none of the Redox games (e.g. /bin/minesweeper
) have been included yet.
- On your Redox emulation, log into the system as user `user` with an empty password.
- Open a `Terminal` window by clicking on the icon in the toolbar at the bottom of the Redox screen, and type `ls /bin`. You will see that `minesweeper` is not listed.
- Type `Ctrl-Alt-G` to regain control of your cursor, and click the upper right corner of the Redox window to exit QEMU.
Set up your Configuration
Read through Configuration Settings. Then do the following.
- From your
redox
base directory, copy an existing configuration, then edit it.
cd ~/tryredox/redox
cp config/x86_64/desktop.toml config/x86_64/myfiles.toml
nano config/x86_64/myfiles.toml
- Look for the `[packages]` section and add the package to the configuration. You can add the package anywhere in the `[packages]` section, but by convention, we add them to the end or to an existing related area of the section.

...
[packages]
...
uutils = {}
# Add this line:
games = {}
...
- Change your `CONFIG_NAME` in `.config` to refer to your `myfiles.toml` configuration definition.

nano .config

# Add this line:
CONFIG_NAME?=myfiles
-
Save all your changes and exit the editor.
Build the System
- In your base
redox
folder, e.g.~/tryredox/redox
, build the system and run it in QEMU.
cd ~/tryredox/redox
make all
make qemu
Or
cd ~/tryredox/redox
make all qemu
- On your Redox emulation, log into the system as user `user` with an empty password.
- Open a `Terminal` window by clicking on the icon in the toolbar at the bottom of the Redox screen, and type `ls /bin`. You will see that `minesweeper` is listed.
- In the terminal window, type `minesweeper`. Play the game using the arrow keys or `WSAD`; press `space` to reveal a spot and `f` to flag a spot when you suspect a mine is present. When you type `f`, an `F` character will appear.
If you had a problem, use this command to log any possible errors on your terminal output:
make r.recipe-name 2>&1 | tee recipe-name.log
And that's it! Sort of.
Dependencies
The majority of Rust programs use crates without C/C++ dependencies (their "Build Instructions" don't require Linux distribution packages); in these cases you just need to port the necessary crates (if they give errors) or implement missing functionality in `relibc` (you will need to update the Rust `libc` crate).
If the "Build Instructions" of the Rust program have Linux distribution packages to install, it's a mixed Rust/C/C++ program, read Dependencies to port these programs.
Update crates
In some cases the `Cargo.lock` of a Rust program can reference a version of a crate that lacks Redox patches (too old) or has broken Redox support (code changes that make the target OS fail); this will give you an error during recipe compilation.
Using a Script
The "script" template type executes shell commands. However, in order to keep scripts small, a lot of the script definition is done for you. Pre-script goes before your script
content, and Post-script goes after.
Pre-script
# Add cookbook bins to path
export PATH="${COOKBOOK_ROOT}/bin:${PATH}"
# This puts cargo build artifacts in the build directory
export CARGO_TARGET_DIR="${COOKBOOK_BUILD}/target"
# This adds the sysroot includes for most C compilation
#TODO: check paths for spaces!
export CFLAGS="-I${COOKBOOK_SYSROOT}/include"
export CPPFLAGS="-I${COOKBOOK_SYSROOT}/include"
# This adds the sysroot libraries and compiles binaries statically for most C compilation
#TODO: check paths for spaces!
export LDFLAGS="-L${COOKBOOK_SYSROOT}/lib --static"
# These ensure that pkg-config gets the right flags from the sysroot
export PKG_CONFIG_ALLOW_CROSS=1
export PKG_CONFIG_PATH=
export PKG_CONFIG_LIBDIR="${COOKBOOK_SYSROOT}/lib/pkgconfig"
export PKG_CONFIG_SYSROOT_DIR="${COOKBOOK_SYSROOT}"
# cargo template
COOKBOOK_CARGO="${COOKBOOK_REDOXER}"
COOKBOOK_CARGO_FLAGS=(
--path "${COOKBOOK_SOURCE}"
--root "${COOKBOOK_STAGE}"
--locked
--no-track
)
function cookbook_cargo {
"${COOKBOOK_CARGO}" install "${COOKBOOK_CARGO_FLAGS[@]}"
}
# configure template
COOKBOOK_CONFIGURE="${COOKBOOK_SOURCE}/configure"
COOKBOOK_CONFIGURE_FLAGS=(
--host="${TARGET}"
--prefix=""
--disable-shared
--enable-static
)
COOKBOOK_MAKE="make"
COOKBOOK_MAKE_JOBS="$(nproc)"
function cookbook_configure {
"${COOKBOOK_CONFIGURE}" "${COOKBOOK_CONFIGURE_FLAGS[@]}"
"${COOKBOOK_MAKE}" -j "${COOKBOOK_MAKE_JOBS}"
"${COOKBOOK_MAKE}" install DESTDIR="${COOKBOOK_STAGE}"
}
Post-script
# Strip binaries
if [ -d "${COOKBOOK_STAGE}/bin" ]
then
find "${COOKBOOK_STAGE}/bin" -type f -exec "${TARGET}-strip" -v {} ';'
fi
# Remove libtool files
if [ -d "${COOKBOOK_STAGE}/lib" ]
then
find "${COOKBOOK_STAGE}/lib" -type f -name '*.la' -exec rm -fv {} ';'
fi
# Remove cargo install files
for file in .crates.toml .crates2.json
do
if [ -f "${COOKBOOK_STAGE}/${file}" ]
then
rm -v "${COOKBOOK_STAGE}/${file}"
fi
done
Modifying an Existing Package
If you want to make changes to an existing Redox package for your own purposes, you can do your work in the directory cookbook/recipes/PACKAGE/source
. The cookbook process will not fetch sources if they are already present in that folder. However, if you intend to do significant work or to contribute changes to Redox, please follow Coding and Building.
Create your own - Hello World
To create your own program to be included, you will need to create the recipe. This example walks through adding the "hello world"
program that cargo new
automatically generates to a local build of the operating system.
This process is largely the same for other Rust crates and even non-Rust programs.
Setting up the recipe
The cookbook will only build programs that have a recipe defined in
cookbook/recipes
. To create a recipe for Hello World, first create a
directory cookbook/recipes/helloworld
. Inside this directory create a file
recipe.toml
and add these lines to it:
[build]
template = "cargo"
The [build]
section defines how cookbook should build our project. There are
several templates but "cargo"
should be used for Rust projects.
The `[source]` section of the recipe tells Cookbook how to fetch the sources for a program from a git or tarball URL.
This is done if cookbook/recipes/PACKAGE/source
does not exist, during make fetch
or during the fetch step of make all
. For this example, we will simply develop in the source
directory, so no [source]
section is necessary.
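For contrast, a recipe that does fetch its sources might look like the following sketch (the repository URL is shown only as an illustration):

```toml
# Illustrative recipe.toml that fetches sources from a Git repository
[source]
git = "https://gitlab.redox-os.org/redox-os/games.git"

[build]
template = "cargo"
```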
Writing the program
Since this is a Hello World example, we are going to have Cargo write the code for us. In cookbook/recipes/helloworld
, do the following:
mkdir source
cd source
cargo init --name="helloworld"
This creates a Cargo.toml
file and a src
directory with the Hello World program.
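The generated `Cargo.toml` will look roughly like this (the exact `edition` value depends on your Cargo version):

```toml
[package]
name = "helloworld"
version = "0.1.0"
edition = "2021"

[dependencies]
```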
Adding the program to the Redox build
To be able to access a program from within Redox, it must be added to the
filesystem. As above, create a filesystem config config/x86_64/myfiles.toml
or similar by copying an existing configuration, and modify CONFIG_NAME
in .config to be myfiles
. Open config/x86_64/myfiles.toml
and add helloworld = {}
to the [packages]
section.
During the creation of the Redox image, the build system installs those packages on the image filesystem.
[packages]
userutils = {}
...
# Add this line:
helloworld = {}
Then, to build the Redox image, including your program, go to your redox
base directory and run make rebuild
.
cd ~/tryredox/redox
make rebuild
Running your program
Once the rebuild is finished, run make qemu
, and when the GUI starts, log in to Redox, open the terminal, and run helloworld
. It should print
Hello, world!
Note that the helloworld
binary can be found in /bin
on Redox (ls /bin
).
Coding and Building
(Before reading this page you must read the Build System Quick Reference page)
- Working with Git
- Using Multiple Windows
- Set up your Configuration
- The Recipe
- Git Clone
- Edit your Code
- Check your Code on Linux
- The Full Rebuild Cycle
- Test Your Changes
- Update crates
- Search Text On Files
- Checking In your Changes
- Shortening the Rebuild Cycle
- Working with an unpublished version of a crate
- A Note about Drivers
- Development Tips
- VS Code Tips and Tricks
Working with Git
Before starting the development, read through Creating Proper Pull Requests, which describes how the Redox team uses Git.
In this example, we will discuss creating a fork of the games
package, pretending you are going to create a Merge Request
for your changes. Don't actually do this. Only create a fork when you have changes that you want to send to Redox upstream.
Anonymous commits
If you are new to Git, it requests your username and email before the first commit in any local repository. If you don't want to insert your personal information, run:
- Repository
cd your-repository-folder
git config user.name 'Anonymous'
git config user.email '<>'
This command will make you anonymous only in this repository.
- Global
git config --global user.name 'Anonymous'
git config --global user.email '<>'
This command will make you anonymous in all repositories of your user.
Using Multiple Windows
For clarity and ease of use, we will be using a couple of Terminal
windows on your host system, each running a different bash shell instance.
- The
Build
shell, normally at~/tryredox/redox
or whatever your baseredox
directory is. - The
Coding
shell, normally at~/tryredox/redox/cookbook/recipes/games/source
.
Set up your Configuration
To get started, follow the steps in Including a Program in a Redox Build to include the games
package in your myfiles
configuration file. In your Terminal
window, go to your redox
base directory and run:
make qemu
On Redox, run minesweeper
as described in the link above. Type the letter f
and you will see F
appear on your screen. Use Ctrl-Alt-G
to regain control of your cursor, and click the upper right corner to exit QEMU.
Keep the Terminal
window open. That will be your Build
shell.
The Recipe
Let's walk through contributing to the Redox subpackage games
, which is a collection of low-def games. We are going to modify minesweeper
to display P instead of F on flagged spots.
The games
package is built in the folder cookbook/recipes/games
. When you clone
the redox
base package, it includes a file cookbook/recipes/games/recipe.toml
. The recipe tells the build system how to get the source and how to build it.
When you build the system and include the games
package, the toolchain does a git clone
into the directory cookbook/recipes/games/source
. Then it builds the package in the directory cookbook/recipes/games/target
.
Edit the recipe so it does not try to automatically clone the sources.
- Create a
Terminal
window runningbash
on your host system, which we will call yourCoding
shell. - Change to the
games
directory. - Open
recipe.toml
in an editor.
cd ~/tryredox/redox/cookbook/recipes/games
nano recipe.toml
- Comment out the
[source]
section at the top of the file.
# [source]
# git = "https://gitlab.redox-os.org/redox-os/games.git"
- Save your changes.
Git Clone
To set up this package for contributing, do the following in your Coding
shell.
- Delete the source and target directories in
cookbook/recipes/games
. - Clone the package into the
source
directory, either specifying it in thegit clone
or by moving it afterclone
.
rm -rf source target
git clone https://gitlab.redox-os.org/redox-os/games.git --origin upstream --recursive
mv games source
-
If you are making a change that you want to contribute, (you are not, don't actually do this) at this point you should follow the instructions in Creating Proper Pull Requests, replacing
redox.git
withgames.git
. Make sure you fork the correct repository, in this case redox-os/games. Remember to create a new branch before you make any changes. -
If you want to Git clone a remote repository (the main repository or your fork), you can add these sections in your
recipe.toml
:
[source]
git = your_git_link
branch = your_branch (optional)
Edit your Code
- Using your favorite code editor, make your changes. We use `nano` in this example, from your `Coding` shell. You can also use VS Code.
cd source
nano src/minesweeper/main.rs
- Search for the line containing the definition of the
FLAGGED
constant (around line 36), and change it toP
.
const FLAGGED: &'static str = "P";
Check your Code on Linux
Most Redox applications are source-compatible with Linux without being modified. You can (and should) build and test your program on Linux.
- From within the
Coding
shell, go to thesource
directory and use the Linux version ofcargo
to check for errors.
cargo check
(Since much of the code in games
is older (pre-2018 Rust), you will get several warnings. They can be ignored)
You could also use cargo clippy
, but minesweeper
is not clean enough to pass.
- The
games
package creates more than one executable, so to testminesweeper
on Linux, you need to specify it tocargo
. In thesource
directory, do:
cargo run --bin minesweeper
The Full Rebuild Cycle
After making changes to your package, you should make rebuild
, which will check for any changes to packages and make a new Redox image. make all
and make qemu
do not check for packages that need to be rebuilt, so if you use them, your changes may not be included in the system. Once you are comfortable with this process, you can try some tricks to save time.
- Within your
Build
shell, in yourredox
directory, do:
script build.log
make rebuild
exit
The `script` command starts a new shell and logs all the output from the `make` command.
The `exit` command exits the shell started by `script`. Remember to exit the `script` shell to ensure all log messages are written to `build.log`. There's also a trick to flush the log.
- You can now scan through
build.log
to check for errors. The file is large and contains many ANSI Escape Sequences, so it can be hard to read. However, if you encountered a fatal build error, it will be at the end of the log, so skip to the bottom and scan upwards.
Test Your Changes
In the Redox instance started by make qemu
, test your changes to minesweeper
.
- Log in with user
user
and no password. - Open a
Terminal
window. - Type
minesweeper
. - Use your arrow keys or
WSAD
to move to a square and typef
to set a flag. The characterP
will appear.
Congratulations! You have modified a program and built the system! Next, create a bootable Redox with your change.
- If you are still running QEMU, type
Ctrl-Alt-G
and click the upper right corner of the Redox window to exit. - In your
Build
shell, in theredox
directory, do:
make live
In the directory build/x86_64/myfiles
, you will find the file livedisk.iso
. Follow the instructions for Running on Real Hardware and test out your change.
Test Your Changes (out of the Redox build system)
Redoxer is the tool used to build/run Rust programs (and C/C++ programs with zero dependencies) for Redox; it downloads the Redox toolchain and makes Cargo use it.
Commands
- Install the `redoxer` tool:
  cargo install redoxer
- Install the `redoxer` toolchain:
  redoxer toolchain
- Build the project with `redoxer`:
  redoxer build
- Run the project with `redoxer`:
  redoxer run
- Test the project with `redoxer`:
  redoxer test
- Run an arbitrary executable (`echo hello`) with `redoxer`:
  redoxer exec echo hello
Testing On Real Hardware
You can use the `make live` command to create bootable images; it is used instead of `make image`.
This command creates the file `build/your-cpu-arch/your-config/livedisk.iso`, which you can burn to a USB drive, CD or DVD (if you have a USB drive, Popsicle is a simple program to flash your device).
Full bootable image creation
- Update your recipes and create a bootable image:
make rebuild live
Partial bootable image creation
- Build your source changes on some recipe and create a bootable image (no QEMU image creation):
make c.recipe-name r.recipe-name live
- Manually update multiple recipes and create a bootable image (quicker than `make rebuild`):
make r.recipe1 r.recipe2 live
Flash the bootable image on your USB device
If you don't have a GUI/display server to run Popsicle, you can use the Unix tool `dd`; follow the steps below:
- Go to the files of your Cookbook configuration:
cd build/your-cpu-arch/your-config
- Flash your device with
dd
First you need to find your USB device ID, use this command to show the IDs of all connected disks on your computer:
ls /dev/disk/by-id
Search for the items beginning with `usb` and find your USB device model; you will copy and paste this ID into the `dd` command below (don't use the IDs ending with `part-x`).
sudo dd if=livedisk.iso of=/dev/disk/by-id/usb-your-device-model oflag=sync bs=4M status=progress
In the `/dev/disk/by-id/usb-your-device-model` path, replace the `usb-your-device-model` part with the USB device ID you obtained before.
Double-check the "of=/dev/disk/by-id/usb-your-device-model" part to avoid data loss
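If you want to rehearse the `dd` invocation safely before touching a real device, you can run the same command against an ordinary file; the paths below are hypothetical stand-ins, not real devices:

```shell
# Create a stand-in for the real livedisk.iso (8 MiB of zeros)
truncate -s 8M /tmp/livedisk.iso

# Same flags as the real flashing command, but writing to a regular file
dd if=/tmp/livedisk.iso of=/tmp/fake-usb.img oflag=sync bs=4M status=progress

# Verify the copy is byte-for-byte identical, as you would after flashing
cmp /tmp/livedisk.iso /tmp/fake-usb.img && echo "images match"
```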
Burn your CD/DVD with the bootable image
- Go to the files of your Cookbook configuration:
cd build/your-cpu-arch/your-config
- Verify that your optical disk device can write to CD/DVD:
cat /proc/sys/dev/cdrom/info
Check whether the "Can write" items show `1` (yes) or `0` (no); the output also shows the optical disk devices on the computer: `/dev/srX`.
- Burn the disk with xorriso
xorriso -as cdrecord -v -sao dev=/dev/srX livedisk.iso
In the `/dev/srX` part, the `X` letter is your optical device number.
Update crates
Search Text On Files
To find which package contains a particular command, crate, or function call, you can use the `grep` command. This will speed up your development workflow.

- Command examples

```
grep -rnw "redox-syscall" --include "Cargo.toml"
```

This command will find any `Cargo.toml` file that contains the phrase "redox-syscall". Helpful for finding which package contains a command or uses a crate.

```
grep -rni "physmap" --include "*.rs"
```

This command will find any `.rs` file that contains the string "physmap". Helpful for finding where a function is used or defined.

Options context:

- `-n` - Display the line number of the matched text in each file.
- `-r` - Search directories recursively.
- `-w` - Match only whole words.
- `-i` - Ignore case distinctions in patterns and data.

- grep explanation - GeeksforGeeks article explaining the `grep` program.
Checking In your Changes
Don't do this now, but if you have permanent changes to contribute to a package, at this point you would `git push` and create a Merge Request, as described in Creating Proper Pull Requests.
If you are contributing a new package, such as porting a Rust application to Redox, you need to check in the `recipe.toml` file. It goes in the `cookbook` subproject. You may also need to modify a filesystem config file, such as `config/demo.toml`, which goes in the `redox` project. You must fork and do a proper Pull Request for each of these projects. Please coordinate with the Redox team via Chat before doing this.
Shortening the Rebuild Cycle
To skip some of the steps in a full `rebuild`, here are some tricks.
Build your Package for Redox
You can build just the `games` package, rather than having `make rebuild` check every package for changes. This can help shorten the build cycle if you are trying to resolve issues such as compilation errors or linking to libraries.

- In your Build shell, in the `redox` directory, type:

```
make r.games
```

Redox's makefiles have a rule for `r.PACKAGE`, where `PACKAGE` is the name of a Redox package. It will make that package, ready to load into the Redox filesystem.
Once your Redox package has been successfully built, you can use `make rebuild` to create the image, or, if you are confident you have built all packages successfully, you can skip a complete rebuild and just make a new image.
If you had a problem, use this command to log any errors to your terminal output:

```
make c.recipe-name r.recipe-name 2>&1 | tee recipe-name.log
```
Make a New QEMU Image
Now that all the packages are built, you can make a Redox image without the step of checking for modifications.

- In your Build shell, in the `redox` directory, do:

```
make image qemu
```

`make image` skips building any packages (assuming the last full make succeeded), but it ensures a new image is created, which should include the package you built in the previous step.
Quickest Trick To Test Changes
Run:

```
make c.recipe-name r.recipe-name image qemu
```

This command will build just your modified recipe, then update your QEMU image with it and run QEMU with a GUI.
Insert Text Files On QEMU (quickest method)
If you need to move text files, such as shell scripts or command output, from or to your Redox instance running on QEMU, use the Terminal window that you used to start QEMU. To capture the output of a Redox command, run `script` before starting QEMU.

```
script qemu.log
make qemu
redox login: user
# execute your commands, with output to the terminal
# exit QEMU
# exit the shell started by script
exit
```

The command output will now be in the file `qemu.log`. Note that if you did not exit the `script` shell, the output may not be complete.
To transfer a text file, such as a shell script, onto Redox, use the Terminal window with copy/paste:

```
redox login: user
cat > myscript.sh << EOF
# Copy the text to the clipboard and use the Terminal window paste
EOF
```

If your file is large, or non-ASCII, or you have many files to copy, you can use the process described in Patch an Image. However, you do so at your own risk.
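If you have not used a here-document before, the `cat > file << EOF` pattern above can be tried on your Linux host first. This is a generic sketch; the file name and contents are just examples:

```shell
# Everything between "<< 'EOF'" and the closing EOF line is written to the file.
# Quoting 'EOF' stops the shell from expanding variables inside the text.
cat > /tmp/myscript.sh << 'EOF'
echo "hello from redox"
EOF

sh /tmp/myscript.sh   # prints: hello from redox
```

On Redox you would paste the script body from your clipboard between the `cat` line and the `EOF` line instead of typing it.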
Files you create while running QEMU remain in the Redox image, so long as you do not rebuild the image. Similarly, files you add to the image will be present when you run QEMU, so long as you do not rebuild the image.
Make sure you are not running QEMU. Run `make mount`. You can now use your file browser to navigate to `build/x86_64/myfiles/filesystem`. Copy your files into or out of the Redox filesystem as required. Make sure to exit your file browser window, and use `make unmount` before running `make qemu`.
Note that in some circumstances, `make qemu` may trigger a rebuild (e.g. `make` detects an out-of-date file). If that happens, the files you copied into the Redox image will be lost.
Insert files on the QEMU image using a recipe
You can use a Redox package to put your files inside the Redox filesystem. In this example we will use the recipe `myfiles`:

- Create the `source` folder inside the `myfiles` recipe directory and move your files to it:

```
mkdir cookbook/recipes/other/myfiles/source
```

- Add the `myfiles` recipe below the `[packages]` section in your Cookbook configuration at `config/your-cpu-arch/your-config.toml`:

```toml
[packages]
...
myfiles = {}
...
```

- Build the recipe and create a new QEMU image:

```
make r.myfiles image
```

- Open QEMU to verify your files:

```
make qemu
```

This recipe makes the Cookbook package all the files in the `source` folder and install them in the `/home/user` directory of your Redox filesystem.
This is the only way to keep your files after the `make image` command.
Insert Files On The QEMU Image
If you feel the need to skip creating a new image, and you want to directly add a file to the existing Redox image, it is possible to do so. However, this is not recommended. You should use a recipe to make the process repeatable. But here is how to access the Redox image as if it were a Linux filesystem.
- NOTE: You must ensure that Redox is not running in QEMU when you do this.

- In your Build shell, in the `redox` directory, type:

```
make mount
```

The Redox image is now mounted as a directory at `build/x86_64/myfiles/filesystem`.

- Remove the old `minesweeper` and replace it with your new version, in the Build shell:

```
cd ~/tryredox/redox/build/x86_64/myfiles/filesystem
rm ./bin/minesweeper
cp ~/tryredox/redox/cookbook/recipes/games/target/x86_64-unknown-redox/stage/bin/minesweeper ./bin
```

- Unmount the filesystem and test your image. NOTE: You must unmount before you start QEMU.

```
cd ~/tryredox/redox
make unmount
make qemu
```

The new version of `minesweeper` is now in your Redox filesystem.
Working with an unpublished version of a crate
Some recipes use Cargo dependencies (`Cargo.toml`) together with recipe dependencies (`recipe.toml`). If you are making a change to one of these Cargo dependencies, your merged changes will take a while to appear on crates.io, as we publish there instead of using our GitLab fork.
To test your changes quickly, follow these tutorials in the Cargo documentation:
A Note about Drivers
Drivers are a special case for rebuild. The source for drivers is fetched both for the `drivers` recipe and the `drivers-initfs` recipe. The `initfs` recipe also copies some drivers from `drivers-initfs` during the build process. If your driver is included in `initfs`, you need to keep all three in sync. The easiest solution is to write a build shell script something like the following, which should be run in your `redox` base directory. (Note: this assumes your driver code edits are in the directory `cookbook/recipes/drivers`. Don't accidentally remove your edited code.)

```
rm -rf cookbook/recipes/drivers-initfs/{source,target} cookbook/recipes/initfs/target
cp -R cookbook/recipes/drivers/source cookbook/recipes/drivers-initfs
make rebuild
make qemu
```
Development Tips
- Make sure your build system is up-to-date; read this section in case of doubt.
- If some program can't build or doesn't work properly, remember that something could be missing in relibc: some function may be absent, or a bug may be hiding there.
- If you have an error in QEMU, remember to test different settings or check your operating system (Debian, Ubuntu, and Pop!_OS are the recommended Linux distributions for Redox testing/development).
- Remember to log all errors; you can use this command as an example:

```
your-command 2>&1 | tee file-name.log
```

- If you have a problem that seems to have no solution, think of simple/obvious things; sometimes you are very confident in your method and forget obvious causes (it's very common).
- If you want a quicker review of your Merge Request, make it small; Jeremy will read it faster.
- If your big Merge Request is taking too long to merge, try to shrink it into other small MRs and make sure they don't break anything; if splitting it up breaks your changes, don't split it.
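The logging tip above works because `2>&1` merges stderr into stdout before the pipe, so `tee` captures both streams. A small sketch with a throwaway command and a hypothetical log file name:

```shell
# "out" goes to stdout, "err" goes to stderr; 2>&1 merges them so tee sees both.
( echo "out"; echo "err" >&2 ) 2>&1 | tee /tmp/example.log

# Both lines ended up in the log file:
grep -c . /tmp/example.log   # prints: 2
```

Without `2>&1`, only "out" would reach the log; compiler errors (which go to stderr) would be lost.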
VS Code Tips and Tricks
Although not for every Rustacean, VS Code is helpful for those who are working with unfamiliar code. We don't get the benefit of all its features, but the Rust support in VS Code is very good.
If you have not used VS Code with Rust, here's an overview. VS Code installation instructions are here.
After installing the rust-analyzer
extension as described in the overview, you get access to several useful features.
- Inferred types and parameter names as inline hints.
- Peeking at definitions and references.
- Refactoring support.
- Autoformat and clippy on Save (optional).
- Visual Debugger (if your code can run on Linux).
- Compare/Revert against the repository with the Git extension.
Using VS Code on individual packages works pretty well, although it sometimes takes a couple of minutes to kick in. Here are some things to know.
Start in the "source" folder
In your Coding shell, start VS Code, specifying the `source` directory:

```
code ~/tryredox/redox/cookbook/recipes/games/source
```

Or, if you are already in the `source` directory, just run `code .`, with the period meaning the `source` dir.
Add it to your "Favorites" bar
VS Code remembers the last project you used it on, so typing `code` with no directory, or starting it from your Applications window or Favorites bar, will go back to that project.
After starting VS Code, right click on the icon and select `Add to Favorites`.
Wait a Couple of Minutes
You can start working right away, but after a minute or two, you will notice extra text appear in your Rust code with inferred types and parameter names filled in. This additional text is just hints, and is not permanently added to your code.
Save Often
If you have made significant changes, `rust-analyzer` can get confused, but this can usually be fixed by doing `Save All`.
Don't Use it for the whole of Redox
VS Code cannot grok the gestalt of Redox, so it doesn't work very well if you start it in your `redox` base directory. It can be handy for editing recipes, config, and make files. And if you want to see what you have changed in the Redox project, click on the Source Control icon on the far left, then select the file you want to compare against the repository.
Don't Build the System in a VS Code Terminal
In general, it's not recommended to do a system build from within VS Code; use your Build window instead. This gives you the flexibility to exit VS Code without terminating the build.
Porting Applications using Recipes
The Including Programs in Redox page gives an example of porting/modifying a pure Rust program. Here we will explain the advanced way to port Rust programs, mixed Rust programs (Rust + C/C++ libraries, for example), and C/C++ programs.
(Before reading this page you must read the Build System Quick Reference page.)
- Recipe
- Cookbook
- Sources
- Dependencies
- Building/Testing The Program
- Update crates
- Patch crates
- Cleanup
- Search Text On Recipes
- Search for functions on relibc
- Create a BLAKE3 hash for your recipe
- Submitting MRs
Recipe
A recipe is what we call a software port on Redox. In this section we will explain the recipe structure and things to consider.
Create a folder in `cookbook/recipes` with a file named `recipe.toml` inside; we will edit this file to fit the program's needs.

- Command examples:

```
cd ~/tryredox/redox
mkdir cookbook/recipes/program_example
nano cookbook/recipes/program_example/recipe.toml
```

The `recipe.toml` example below shows the supported recipe syntax; adapt it for your use case.
```toml
[source]
git = "software-repository-link.git"
branch = "branch-name"
rev = "commit-revision"
tar = "software-tarball-link.tar.gz"
blake3 = "your-hash"
patches = [
    "patch1.patch",
    "patch2.patch",
]

[build]
template = "build-system"
dependencies = [
    "library1",
    "library2",
]
script = """
insert your script here
"""

[package]
dependencies = [
    "runtime1",
    "runtime2",
]
```
- Don't remove/forget the `[build]` section (the `[source]` section can be removed if you don't use `git =` and `tar =`, or if you have the `source` folder present in your recipe folder).
- Insert `git =` to clone your software repository; if it's not present, the build system will build the contents of the `source` folder in the recipe directory.
- Insert `branch =` if you want to use another branch.
- Insert `rev =` if you want to use a commit revision (SHA1).
- Insert `tar =` to download/extract tarballs; this can be used instead of `git =`.
- Insert `blake3 =` to add BLAKE3 checksum verification for the tarball of your recipe.
- Insert `patches =` to use patch files; they need to be in the same directory as `recipe.toml` (not needed if your program compiles/runs without patches).
- Insert `dependencies =` if your software has dependencies; to make them work, your dependencies/libraries need their own recipes (if your software doesn't need this, remove it from your `recipe.toml`).
- Insert `script =` to run your custom script (`script =` is enabled when you define your `template =` as `custom`).
Note that there are two `dependencies =` entries, one below the `[build]` section and the other below the `[package]` section.

- Below `[build]` - development libraries.
- Below `[package]` - runtime dependencies or data files.
Quick Recipe Template
This is a recipe template for a quick porting workflow.

```toml
#TODO Not compiled or tested

[source]
tar = "tarball-link"

[build]
template = "build-system"
dependencies = [
    "library1",
]
```

You can quickly copy/paste this template into each `recipe.toml`; that way you spend less time writing and have fewer chances for typos.
If the program uses a Git repository, you can easily rename the `tar` field to `git`.
If the program doesn't need dependencies, you can quickly remove the `dependencies = []` section.
After the `#TODO` you will write your current port status.
Environment Variables
If you want to apply changes to the program source/binary you can use these variables in your commands:

- `${COOKBOOK_RECIPE}` - Represents the recipe folder.
- `${COOKBOOK_SOURCE}` - Represents the `source` folder at `cookbook/recipes/recipe-name/source` (program source).
- `${COOKBOOK_SYSROOT}` - Represents the `sysroot` folder at `cookbook/recipes/recipe-name/target/${TARGET}` (library sources).
- `${COOKBOOK_STAGE}` - Represents the `stage` folder at `cookbook/recipes/recipe-name/target/${TARGET}` (recipe binaries).

We recommend that you quote these variables with the `"` symbol to handle any spaces in the path; unquoted spaces are interpreted as command separators and will break the path.

Example:

```
"${VARIABLE_NAME}"
```

If you have a folder inside the variable folder you can reference it with:

```
"${VARIABLE_NAME}"/folder-name
```

Or

```
"${VARIABLE_NAME}/folder-name"
```
Cookbook
The GCC/LLVM compiler frontends on Linux use `glibc` (GNU C Library) by default when linking; they create Linux ELF binaries that don't work on Redox because `glibc` doesn't use the Redox syscalls.
To make the compiler use `relibc` (the Redox C Library), the Cookbook system needs to tell the software's build system to use it; this is done with environment variables.
The Cookbook has templates to avoid custom commands, but it's not always possible because some build systems are customized or not adapted for Cookbook compilation.
(Each build system has different environment variables to enable cross-compilation and pass a custom C library to the compiler.)
Custom Compiler
Cookbook uses a custom GCC/LLVM/rustc with Redox patches to compile recipes with `relibc` linking; you can check them here.
Cross Compilation
Cookbook's default behavior is cross-compilation, because it brings more flexibility to the build system: it makes the compiler use `relibc` or compile to a different CPU architecture.
By default Cookbook respects the architecture of your host system, but you can change it easily in your `.config` file (`ARCH?=` field).
(We recommend that you don't set the CPU architecture inside the `recipe.toml` script field, because you lose flexibility, can't merge the recipe for the CI server, and could forget this custom setting.)
Templates
The template is the type of the program/library build system. Programs using an Autotools build system will have a `configure` file at the root of the repository/tarball source; programs using the CMake build system will have a `CMakeLists.txt` file with all available CMake flags and a `cmake` folder; programs using the Meson build system will have a `meson.build` file; Rust programs will have a `Cargo.toml` file.

- `template = "cargo"` - compile with `cargo` (Rust programs; you can't use the `script =` field).
- `template = "configure"` - compile with `configure` and `make` (you can't use the `script =` field).
- `template = "custom"` - run your custom `script =` field and compile (any build system/installation process).

The `script =` field runs any shell command; it's useful if the software uses a script to build from source or needs custom options that Cookbook doesn't support.
To find the supported Cookbook shell commands, look at the recipes using a `script =` field in their `recipe.toml`, or read the source code.
Custom Template
The "custom" template enables the `script =` field; this field will run any command supported by your shell.
Cargo script template
```toml
script = """
COOKBOOK_CARGO="${COOKBOOK_REDOXER}"
COOKBOOK_CARGO_FLAGS=(
    --path "${COOKBOOK_SOURCE}"
    --root "${COOKBOOK_STAGE}"
    --locked
    --no-track
)
function cookbook_cargo {
    "${COOKBOOK_CARGO}" install "${COOKBOOK_CARGO_FLAGS[@]}"
}
"""
```
Configure script template
```toml
script = """
COOKBOOK_CONFIGURE_FLAGS+=(
    --program-flag
)
cookbook_configure
"""
```

This script template is used for a GNU Autotools build system with flags; some programs need these flags for customization.
Change or copy/paste the `--program-flag` according to your needs.
CMake script template
```toml
script = """
COOKBOOK_CONFIGURE="cmake"
COOKBOOK_CONFIGURE_FLAGS=(
    -DCMAKE_BUILD_TYPE=Release
    -DCMAKE_CROSSCOMPILING=True
    -DCMAKE_EXE_LINKER_FLAGS="-static"
    -DCMAKE_INSTALL_PREFIX="/"
    -DCMAKE_PREFIX_PATH="${COOKBOOK_SYSROOT}"
    -DCMAKE_SYSTEM_NAME=Generic
    -DCMAKE_SYSTEM_PROCESSOR="$(echo "${TARGET}" | cut -d - -f1)"
    -DCMAKE_VERBOSE_MAKEFILE=On
    "${COOKBOOK_SOURCE}"
)
cookbook_configure
"""
```

More CMake options can be added with a `-D` prefix; customizing the CMake compilation is very easy.
Cargo packages script template
```toml
script = """
cookbook_cargo_packages program-name
"""
```

(You can use `cookbook_cargo_packages program1 program2` if there is more than one package.)

This script is used for Rust programs that use folders inside the repository for compilation; you can use the folder name or program name.
This will fix the "found virtual manifest instead of package manifest" error.
Cargo package with flags
If you need a script for a package with flags (customized), you can use this script:

```toml
script = """
"${COOKBOOK_CARGO}" build \
    --manifest-path "${COOKBOOK_SOURCE}/Cargo.toml" \
    --package "${package-name}" \
    --release \
    --add-your-flag-here
mkdir -pv "${COOKBOOK_STAGE}/bin"
cp -v \
    "target/${TARGET}/release/${package-name}" \
    "${COOKBOOK_STAGE}/bin/${recipe}_${package-name}"
"""
```

The `--add-your-flag-here` part will be replaced by your flag name, and the parts with `package-name` will be replaced by the real package name.
Cargo flags script template
```toml
script = """
cookbook_cargo --features flag-name
"""
```

(You can use `cookbook_cargo --features flag1 flag2` if there is more than one flag.)

Some Rust software has Cargo flags for customization; search them to match your needs or to make the program compile.
Disable the default Cargo flags
It's common that some flag of the program doesn't work on Redox. If you don't want to spend much time testing which flags work and which don't, you can disable all of them to see if the most basic configuration of the program works, with this script:

```toml
script = """
cookbook_cargo --no-default-features
"""
```

Enable all Cargo flags
If you want to enable all flags of the program, use:

```toml
script = """
cookbook_cargo --all-features
"""
```
Cargo examples script template
```toml
script = """
cookbook_cargo_examples example-name
"""
```

(You can use `cookbook_cargo_examples example1 example2` if there is more than one example.)

This script is used for the examples of Rust programs.
Script template
Adapted scripts
This script is for scripts adapted to be packaged: they already have shebangs and don't need renaming to remove a script extension.

- One script

```toml
script = """
mkdir -pv "${COOKBOOK_STAGE}"/bin
cp "${COOKBOOK_SOURCE}"/script-name "${COOKBOOK_STAGE}"/bin/script-name
chmod a+x "${COOKBOOK_STAGE}"/bin/script-name
"""
```

This script will copy the script from the `source` folder to the `stage` folder and mark it as executable to be packaged.
(You probably need to mark it as executable; we don't know if all scripts carry executable permission.)

- Multiple scripts

```toml
script = """
mkdir -pv "${COOKBOOK_STAGE}"/bin
cp "${COOKBOOK_SOURCE}"/* "${COOKBOOK_STAGE}"/bin
chmod a+x "${COOKBOOK_STAGE}"/bin/*
"""
```

This script will copy the scripts from the `source` folder to the `stage` folder and mark them as executable to be packaged.
Non-adapted scripts
You need to use these scripts for scripts not adapted for packaging: you need to add shebangs, rename the file to remove the script extension (`.py`, for example), and mark it as executable (`chmod a+x`).

- One script

```toml
script = """
mkdir -pv "${COOKBOOK_STAGE}"/bin
cp "${COOKBOOK_SOURCE}"/script-name.py "${COOKBOOK_STAGE}"/bin/script-name
chmod a+x "${COOKBOOK_STAGE}"/bin/script-name
"""
```

(Replace the "script-name" parts with your script name.)

This script will rename your script (removing the `.py` extension, for example), make it executable, and package it.

- Multiple scripts

```toml
script = """
mkdir -pv "${COOKBOOK_STAGE}"/bin
for script in "${COOKBOOK_SOURCE}"/*
do
    shortname=`basename "$script" ".py"`
    cp -v "$script" "${COOKBOOK_STAGE}"/bin/"$shortname"
    chmod a+x "${COOKBOOK_STAGE}"/bin/"$shortname"
done
"""
```

This script will rename all scripts to remove the `.py` extension, mark them as executable, and package them.
- Shebang
It's the magic behind executable scripts: the shebang makes the system run the script through its interpreter, the way it would execute a binary. If your script doesn't have a shebang at the beginning, it can't work as an executable program.
To fix this, use this script:

```toml
script = """
mkdir -pv "${COOKBOOK_STAGE}"/bin
cp "${COOKBOOK_SOURCE}"/script-name.py "${COOKBOOK_STAGE}"/bin/script-name
sed -i '1 i\#!/usr/bin/env python3' "${COOKBOOK_STAGE}"/bin/script-name
chmod a+x "${COOKBOOK_STAGE}"/bin/script-name
"""
```

The `sed -i '1 i\#!/usr/bin/env python3' "${COOKBOOK_STAGE}"/bin/script-name` command will add the shebang to the beginning of your script.
`python3` is the script interpreter in this case; use `bash` or `lua` or whatever interpreter is appropriate for your case.
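You can check what the `sed` insertion does on a throwaway file; this sketch uses a hypothetical script path and assumes GNU sed (as on most Linux build hosts):

```shell
# A one-line script without a shebang:
printf 'print("hi")\n' > /tmp/demo-script

# '1 i\TEXT' inserts TEXT before line 1; -i edits the file in place:
sed -i '1 i\#!/usr/bin/env python3' /tmp/demo-script

head -n 1 /tmp/demo-script   # prints: #!/usr/bin/env python3
```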
There are many combinations of these script templates: you can download scripts without the `[source]` section, make customized installations, etc.
Sources
Tarballs
Tarballs are the easiest way to compile software, because the build system is already configured (GNU Autotools is the most used), while being faster to download and process (the computer doesn't need to process Git deltas, as it would when cloning a Git repository).
Archives with `tar.xz` and `tar.bz2` tend to have a higher compression level, and thus a smaller file size.
(In cases where you don't find official tarballs, GitHub tarballs are available on the "Releases" and "Tags" pages with a `tar.gz` name in the download button; copy this link and paste it into the `tar =` field of your `recipe.toml`.)
Your `recipe.toml` will have this content:

```toml
[source]
tar = "tarball-link"
```
Links
Sometimes it's hard to find the official tarball of some software, as each project organizes its website differently.
To help with this process, the Arch Linux packages and the AUR are the easiest repositories for finding tarball links, in the configuration of the packages/ports.

- Arch Linux packages - Search for your program, open the program page, see the "Package Actions" category at the top right, and click on the "Source Files" button; a GitLab page will open. Open the `.SRCINFO` file and search for the tarball link in the "source" fields of the file. See this example.
- AUR - Search for your program, open the program page, and go to the "Sources" section at the end of the package details.
Git Repositories
Some programs don't offer tarballs or don't make releases; in that case you need to use their Git repository.
Your `recipe.toml` will have this content:

```toml
[source]
git = "repository-link.git"
```
Dependencies
A program dependency can be a library (a program that offers functions to some program), a runtime (a program that satisfies some program when it's executed), or a build tool (a program to build/configure some program).
Most C, C++, and Rust software lists build tools and runtimes together with development libraries (packages with the `-dev` suffix) in their "Build Instructions".
Example:

```
sudo apt-get install cmake libssl-dev
```

The `cmake` package is the build system, while the `libssl-dev` package contains the linker objects (`.a` and `.so` files) of OpenSSL.
The Debian package system bundles shared/static objects in their `-dev` packages (other Linux distributions just bundle dynamic objects), while Redox uses the source code of the libraries.
(Don't use the `.deb` packages to create recipes; they are adapted for the Debian environment.)
You would need to create a recipe for `libssl-dev` and add it to your `recipe.toml`, while the `cmake` package would need to be installed on your system.
Library dependencies are added below the `[build]` section to keep the "static linking" policy, although some libraries/runtimes don't make sense in this section because they would make the program binary too big.
Runtimes are added below the `[package]` section (the runtime is installed during the package installation).
Mixed Rust programs have crates ending with `-sys` to use C/C++ libraries of the system; sometimes they bundle them.
If you have questions about program dependencies, feel free to ask us on Chat.
If you want an easy way to find dependencies, see the Debian testing packages list.
You can search it with `Ctrl+F`; all package names are clickable, and their homepage is available on the right-hand side of the package description/details.

- Debian packages are the easiest way to find dependencies, because they are the most used by software developers to describe "Build Instructions".
- The compiler will build the development libraries as `.a` files (objects for static linking) or `.so` files (objects for dynamic linking); the `.a` files will be merged into the final binary, while the `.so` files will be installed outside of the binary (stored in the `/lib` directory of the system).

(Linux distributions add a number after the `.so` files to avoid conflicts in the `/lib` folder when packages use different ABI versions of the same library, for example: `library-name.so.6`.)
(You need to do this research for each program because every program is different; the main reason is how the "Build Instructions" are organized.)
Bundled Libraries
Some programs have bundled libraries, managed with CMake or a Python script; the most common case is CMake (emulators do this in most cases).
The reason for this can be portability, or a patched library with optimizations for a specific task of the program.
In some cases a bundled library needs a Redox patch; if it doesn't have one, it will give a compilation error.
Most programs using CMake will try to detect the system libraries in the build environment; if they are absent, the programs use the bundled libraries.
The "system libraries" in this case are the recipes specified in the `dependencies = []` section of your `recipe.toml`.
If you are using a recipe from the `master` branch as a dependency, check whether there is a `.patch` file in the recipe folder, or whether the `recipe.toml` has a `git =` field pointing to the Redox GitLab.
If you find one of these (or if you patched the recipe), you should specify it in the `dependencies = []` section; if not, you can use the bundled libraries without problems.
Generally programs with CMake use a `-DUSE_SYSTEM` flag to control this behavior.
Environment Variables
Sometimes specifying the library recipe in the `dependencies = []` section is not enough; some build systems have environment variables to receive a custom path for external libraries.
When you add a library to your `recipe.toml`, Cookbook will copy the library source code to the `sysroot` folder at `cookbook/recipes/recipe-name/target/${TARGET}`; this folder has an environment variable that can be used inside the `script =` field of your `recipe.toml`.

Example:

```toml
script = """
export OPENSSL_DIR="${COOKBOOK_SYSROOT}"
cookbook_cargo
"""
```

The `export` activates the `OPENSSL_DIR` variable in the environment. This variable is read by the program's build system; it's a way to specify the custom OpenSSL path. As you can see, when the `openssl` recipe is added to the `dependencies = []` section, its sources go to the `sysroot` folder.
Now that the program's build system is satisfied with the OpenSSL sources, the `cookbook_cargo` function calls Cargo to build it.
Programs using CMake don't use environment variables but an option; see this example:

```toml
script = """
COOKBOOK_CONFIGURE="cmake"
COOKBOOK_CONFIGURE_FLAGS=(
    -DCMAKE_BUILD_TYPE=Release
    -DCMAKE_CROSSCOMPILING=True
    -DCMAKE_EXE_LINKER_FLAGS="-static"
    -DCMAKE_INSTALL_PREFIX="/"
    -DCMAKE_PREFIX_PATH="${COOKBOOK_SYSROOT}"
    -DCMAKE_SYSTEM_NAME=Generic
    -DCMAKE_SYSTEM_PROCESSOR="$(echo "${TARGET}" | cut -d - -f1)"
    -DCMAKE_VERBOSE_MAKEFILE=On
    -DOPENSSL_ROOT_DIR="${COOKBOOK_SYSROOT}"
    "${COOKBOOK_SOURCE}"
)
cookbook_configure
"""
```

In this example the `-DOPENSSL_ROOT_DIR` option holds the custom OpenSSL path.
Configuration
To determine the program dependencies, you can use Arch Linux and Debian as references. Arch Linux and the AUR are the best methods, because they separate the build tools from runtimes and libraries, so you make fewer mistakes.
Arch Linux/AUR
Each package page of some program has a "Dependencies" section in the package details; see the items below:

- `(make)` - Build tools (required to build the program)
- `(optional)` - Programs/libraries to enhance the program functionality

The other items are runtime/library dependencies (without `()`).
See the Firefox package, for example.
Debian
Each Debian package page has dependency items; see below:

- `depends` - Necessary dependencies (it doesn't separate build tools from runtimes)
- `recommends` - Expands the software functionality (optional)
- `suggests` - Expands the software functionality (optional)
- `enhances` - Expands the software functionality (optional)

(The `recommends`, `suggests`, and `enhances` items aren't needed to make the program work.)
See the Firefox ESR package, for example.
Testing
- Install the packages for your Linux distribution listed in the "Build Instructions" of the software, and see if it builds on your system first (if packages for your distribution are not available, search for their Debian/Ubuntu equivalents).
- Create the dependency recipe, run `make r.dependency-name`, and see if it builds without errors. If you get an error, it can be a dependency that requires patches, missing C library functions, or missing build tools; investigate until the recipe finishes the build process successfully.

If you run `make r.recipe-name` and it builds successfully, feel free to add the build tools to the bootstrap.sh script (for native builds) or the redox-base-containerfile configuration file (for Podman builds).
The `bootstrap.sh` script and `redox-base-containerfile` cover the build tools required by the recipes in `demo.toml`.
Building/Testing The Program
(Build on your Linux distribution before this step to see if all build system tools and development libraries are correct)
To build your recipe, run:
make r.recipe-name
To test your recipe, run:
make qemu
If you want to test from terminal, run:
make qemu vga=no
If the build process was successful the recipe will be packaged and don't give errors.
If you want to insert this recipe permanently in your QEMU image, add your recipe name below the last item in [packages]
on your TOML config (config/x86_64/your-config.toml
, for example).
- Example -
recipe-name = {}
orrecipe-name = "recipe"
(if you haveREPO_BINARY=1
in your.config
).
To install your compiled recipe on QEMU image, run make image
.
If you had a problem, use this command to log any possible errors on your terminal output:
make r.recipe-name 2>&1 | tee recipe-name.log
The recipe sources will be extracted/cloned into the `source` folder inside your recipe folder, and the binaries go to the `target` folder.
Update crates
Sometimes the `Cargo.lock` of a Rust program pins a crate version without Redox patches or with broken Redox support (code changes that make the target OS fail), which gives you an error during the recipe compilation.

The reason for fixed crate versions is explained here.

To fix this, you will need to update the crates of your recipe after the first compilation and build it again; see the methods below.
One or more crates
In maintained Rust programs you just need to update some crates to get Redox support (because they frequently update their crate versions). Updating only one or a few crates avoids random breaks in the program's dependency chain (due to ABI changes), so it reduces the chance of breakage.

We recommend doing this based on the errors you get during compilation; this method is recommended for maintained programs.
- Go to the `source` folder of your recipe and run `cargo update -p crate-name`, for example:
cd cookbook/recipes/recipe-name/source
cargo update -p crate1 -p crate2
cd -
make r.recipe-name
If you still get the error, run:
make c.recipe-name r.recipe-name
All crates
Most unmaintained Rust programs have very old crate versions without up-to-date Redox support; this method updates all crates of the dependency chain to the latest version.

Be aware that some crates break their ABI frequently and can make the program stop working, which is why you should try the "One or more crates" method first.

(It's also good for testing the latest improvements of the libraries)
- Go to the `source` folder of your recipe and run `cargo update`, for example:
cd cookbook/recipes/recipe-name/source
cargo update
cd -
make r.recipe-name
If you still get the error, run:
make c.recipe-name r.recipe-name
Verify the dependency tree
If you use the above methods but the program is still using old crate versions, see this section:
Patch crates
Redox forks
It's possible that a not-yet-ported crate has a Redox fork with patches; you can search for the crate name here. Generally, the Redox patches stay in the `redox` branch or in a `redox-version` branch that follows the crate version.

To use such a Redox fork in your Rust program, add this text at the end of the `Cargo.toml` in the program source code:
[patch.crates-io]
crate-name = { git = "repository-link", branch = "redox" }
This makes Cargo replace the patched crate across the entire dependency chain. After that, run:
make r.recipe-name
Or (if the above doesn't work)
make c.recipe-name r.recipe-name
Or
cd cookbook/recipes/recipe-name/source
cargo update -p crate-name
cd -
make r.recipe-name
If you still get the error, run:
make c.recipe-name r.recipe-name
Local patches
If you want to patch a crate offline with your own patches, add this text to the `Cargo.toml` of the program:
[patch.crates-io]
crate-name = { path = "patched-crate-folder" }
This makes Cargo replace the crate with the contents of the folder `cookbook/recipes/your-recipe/source/patched-crate-folder` in the program source code (you don't need to create this folder manually if you `git clone` the crate source code into the program source directory).

Inside this folder you apply your patches to the crate source and build the recipe.
Cleanup
If you have problems (e.g. an outdated recipe), try running these commands:
- This command will wipe your old recipe binary/source.
make c.recipe-name u.recipe-name
- This script will delete your recipe binaries/sources and rebuild them (fresh build).
scripts/rebuild-recipe.sh recipe-name
Search Text on Recipes
To speed up your porting workflow you can use the `grep` tool to search the recipe configurations:
cd cookbook/recipes
grep -rnw "text" --include "recipe.toml"
This command searches for the given text in the `recipe.toml` file of each recipe folder.
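To illustrate the output format, here is a self-contained sketch using a throwaway directory tree instead of the real cookbook (all names are made up):

```shell
# Build a fake recipe tree with two recipes
mkdir -p demo-recipes/hello demo-recipes/world
echo 'template = "cargo"' > demo-recipes/hello/recipe.toml
echo 'template = "custom"' > demo-recipes/world/recipe.toml

# Search it the same way: -r recurses, -n prints line numbers,
# -w matches whole words only
cd demo-recipes
grep -rnw "cargo" --include "recipe.toml" .
# prints: ./hello/recipe.toml:1:template = "cargo"
```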
Search for functions on relibc
Sometimes your program doesn't build because relibc lacks the necessary functions. To verify whether they are implemented, run these commands:
cd relibc
grep -nrw "function-name" --include "*.rs"
Insert the function name in place of `function-name`.
Create a BLAKE3 hash for your recipe
You need to create a BLAKE3 hash of your recipe tarball if you want it merged upstream. For this you can use the `b3sum` tool, which can be installed from `crates.io` with `cargo install b3sum`.

After the first run of the `make r.recipe-name` command, run:
b3sum cookbook/recipes/recipe-name/source.tar
It will print the generated BLAKE3 hash; copy and paste it into the `blake3 =` field of your `recipe.toml`.
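For a tarball-based recipe the field lives in the `[source]` section. A hypothetical sketch (the URL and template are placeholders; the hash is whatever `b3sum` printed):

```toml
[source]
tar = "https://example.org/program/program-1.0.tar.gz"
blake3 = "the-hash-printed-by-b3sum"

[build]
template = "cargo"
```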
Submitting MRs
If you want to add your recipe to Cookbook so it becomes a Redox package on the CI server, submit your merge request with proper dependencies and comments.

We recommend that you make one commit for each new recipe, and it's preferable to test it before opening the MR, but you can send it untested with a `#TODO` on the first line of the `recipe.toml` file.

After the `#TODO`, explain what is missing in your recipe (put a space after the `TODO`; if you forget it, `grep` can't scan properly). That way we can `grep` for `#TODO` and anyone can improve the recipe easily. (Don't forget the `#` character before the text; the TOML syntax treats everything after it as a comment, not code.)
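A hypothetical first draft of such a recipe might look like this (the repository URL and the reason are invented for illustration):

```toml
#TODO not compiling, relibc is missing some termios functions
[source]
git = "https://example.org/some-program.git"

[build]
template = "cargo"
```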
Porting Case Study
As a non-trivial example of porting a Rust app, let's look at what was done to port gitoxide. This port was already done, so it is now much simpler, but perhaps some of these steps will apply to you.
The goal when porting is to capture all the necessary configuration in recipes and scripts, and to avoid requiring a fork of the repo or upstreaming changes. This is not always feasible, but forking/upstreaming should be avoided when it can be.
We use full pathnames for clarity; you don't need to.
Build on Linux
Before we start, we need to build the software for our Linux system and make sure it works. This is not part of the porting, it's just to make sure our problems are not coming from the Linux version of the software. We follow the normal build instructions for the software we are porting.
cd ~
git clone https://github.com/Byron/gitoxide.git
cd gitoxide
cargo run --bin ein
Set up the working tree
We start with a fresh clone of the Redox repository. In a Terminal/Console/Command window:
mkdir -p ~/redox-gitoxide
cd ~/redox-gitoxide
git clone git@gitlab.redox-os.org:redox-os/redox.git --origin upstream --recursive
The new recipe will be part of the `cookbook` repository, so we need to fork and then branch it. To fork the `cookbook` repo:

- In the browser, go to Cookbook
- Click the `Fork` button in the upper right part of the page
- Create a `public` fork under your GitLab user name (it's the only option that's enabled)
Then we need to set up our local `cookbook` repo and create the branch. `cookbook` was cloned when we cloned `redox`, so we will just tweak that. In the Terminal window:
cd ~/redox-gitoxide/redox/cookbook
git remote rename origin upstream
git rebase upstream master
git remote add origin git@gitlab.redox-os.org:MY_USERNAME/cookbook.git
git checkout -b gitoxide-port
Create a Recipe
To create a recipe, we need to make a new directory in `cookbook/recipes` with the name the package will have, in this case `gitoxide`, and create a `recipe.toml` file with a first-draft recipe.
mkdir -p ~/redox-gitoxide/redox/cookbook/recipes/gitoxide
cd ~/redox-gitoxide/redox/cookbook/recipes/gitoxide
nano recipe.toml
Start with the following content in the `recipe.toml` file:
[source]
git = "https://github.com/Byron/gitoxide.git"
[build]
template = "cargo"
First Attempt
Next we attempt to build the recipe. Note that the first attempt may require the Redox toolchain to be updated, so we run `make prefix`, which may take quite a while.
cd ~/redox-gitoxide/redox
make prefix
make r.gitoxide |& tee gitoxide.log
We get our first round of errors (among other messages):
error[E0425]: cannot find value `POLLRDNORM` in crate `libc`
error[E0425]: cannot find value `POLLWRBAND` in crate `libc`
Make a Local Copy of libc
We suspect the problem is that these items have not been defined in the Redox edition of `libc`. `libc` is not a Redox crate; it is a rust-lang crate, but it has parts that are Redox-specific.

We need to work with a local copy of `libc`, and then later ask someone with authority to upstream the required changes.
First, clone `libc` into our `gitoxide` directory.
cd ~/redox-gitoxide/redox/cookbook/recipes/gitoxide
git clone https://github.com/rust-lang/libc.git
Try to find the missing constants.
cd ~/redox-gitoxide/redox/cookbook/recipes/gitoxide/libc
grep -nrw "POLLRDNORM" --include "*.rs"
grep -nrw "POLLWRBAND" --include "*.rs"
Looks like the values are not defined for the Redox version of `libc`. Let's see if they're in `relibc`.
.
cd ~/redox-gitoxide/redox/relibc
grep -nrw "POLLRDNORM" --include "*.rs"
grep -nrw "POLLWRBAND" --include "*.rs"
Yes, both are already defined in `relibc`, and after a bit of poking around, it looks like they have an implementation. They just need to get published in `libc`. Let's do that.
Make Changes to libc
Let's add our constants to our local `libc`. We are not going to bother with `git` because these changes are just for debugging purposes.
Copy the constant declarations from `relibc` and paste them into the appropriate sections of `libc/src/unix/redox/mod.rs`.

In addition to copying the constants, we have to change the type `c_short` to `::c_short` to conform to `libc` style.
cd ~/redox-gitoxide/redox/cookbook/recipes/gitoxide
nano libc/src/unix/redox/mod.rs
We add the following lines to `mod.rs`:

```rust
pub const POLLRDNORM: ::c_short = 0x040;
pub const POLLRDBAND: ::c_short = 0x080;
pub const POLLWRNORM: ::c_short = 0x100;
pub const POLLWRBAND: ::c_short = 0x200;
```
In order to test our changes, we will have to modify our `gitoxide` clone for now. Once the changes to `libc` are upstreamed, we won't need a modified `gitoxide` clone.

To avoid overwriting our work, we want to turn off future fetches of the `gitoxide` source during build, so we change `recipe.toml` to comment out the source section: `nano recipe.toml`.
#[source]
#git = "https://github.com/Byron/gitoxide.git"
[build]
template = "cargo"
We edit `gitoxide`'s `Cargo.toml` so that we use our `libc`.
nano ~/redox-gitoxide/cookbook/recipes/gitoxide/source/Cargo.toml
After the `[dependencies]` section, but before the `[profile]` sections, add the following to `Cargo.toml`:
[patch.crates-io]
libc = { path = "../libc" }
Bump the version number of our `libc`, so it takes priority.
nano ~/redox-gitoxide/cookbook/recipes/gitoxide/libc/Cargo.toml
version = "0.2.143"
Update `gitoxide`'s `Cargo.lock`.
cd ~/redox-gitoxide/redox/cookbook/recipes/gitoxide/source
cargo update
Make sure we have saved all the files we just edited, and let's try building.
cd ~/redox-gitoxide/redox
make r.gitoxide
Our `libc` errors are solved! Remember, these changes will need to be upstreamed by someone with the authority to make changes to `libc`.

Post a request in the chat's Redox OS/MRs room to add the constants to `libc`.
Creating a Custom Recipe
Looking at what is included in `gitoxide`, we see that it uses OpenSSL, which has some custom build instructions described in the docs.

There is already a Redox fork of `openssl` that adds Redox as a target, so we will set up our environment to use it.
In order to do this, we are going to need a custom recipe. Let's start with a simple custom recipe, just to get us going.
Edit our previously created recipe, `cookbook/recipes/gitoxide/recipe.toml`, changing it to look like this:
#[source]
#git = "https://github.com/Byron/gitoxide.git"
[build]
template = "custom"
script = """
printenv
"""
In this version of our recipe, we are just going to print the environment variables during `cook`, so we can see what we might make use of in our custom script. We are not actually attempting to build `gitoxide`.
Now, when we run `make r.gitoxide` in `~/redox-gitoxide/redox`, we see some useful variables such as `TARGET` and `COOKBOOK_ROOT`.
Two key shell functions are provided by the custom script mechanism: `cookbook_cargo` and `cookbook_configure`.

If you need a custom script for building a Rust program, your script should set up the environment, then call `cookbook_cargo`, which calls Redox's version of `cargo`.

If you need a custom script for using a `Makefile`, your script should set up the environment, then call `cookbook_configure`.

If you have a custom build process, or a patch-and-build script, you can just include it in the `script` section and not use either of the above functions.
If you are interested in the code that runs custom scripts, see the function `build()` in `cookbook`'s cook.rs.
Adding a dependency on `openssl` ensures that `openssl` is built before `gitoxide` is attempted, so we can trust that the library contents are in the target directory of the ssl package.

We also need to set the environment variables described in the OpenSSL bindings crate docs.
Our recipe now looks like this:
#[source]
#git = "https://github.com/Byron/gitoxide.git"
[build]
dependencies = [
"openssl",
]
template = "custom"
script = """
export OPENSSL_DIR="${COOKBOOK_SYSROOT}"
export OPENSSL_STATIC="true"
cookbook_cargo
"""
Linker Errors
Now we get to the point where the linker tries to statically link the program and libraries into the executable. This program, called `ld`, reports errors if there are any undefined functions or missing static variable definitions.
undefined reference to `tzset'
undefined reference to `cfmakeraw'
In our case we are missing `tzset`, which is a timezone function, and `cfmakeraw` from `termios`. Both of these functions are normally part of libc. They are defined in the `libc` crate, but they are not implemented by Redox's version of libc, which is called `relibc`. We need to add these functions.
Add Missing Functions to relibc
Let's set up to modify `relibc`. As with `cookbook`, we need a fork of relibc. Click on the `Fork` button and create a public fork. Then update our local `relibc` repo and branch:
cd ~/redox-gitoxide/redox/relibc
git remote rename origin upstream
git rebase upstream master
git remote add origin git@gitlab.redox-os.org:MY_USERNAME/relibc.git
git checkout -b gitoxide-port
Now we need to make our changes to `relibc`...
After a fair bit of work, which we omit here, the functions `tzset` and `cfmakeraw` are implemented in `relibc`. An important note is that in order to publish the functions, they need to be preceded with `#[no_mangle]`:

```rust
#[no_mangle]
extern "C" fn tzset() ...
```
Now let's build the system. The command `touch relibc` changes the timestamp on the `relibc` directory, which causes the library to be updated. We then clean and rebuild `gitoxide`.
cd ~/redox-gitoxide/redox
cd relibc
cargo update
cd ..
touch relibc
make prefix
make c.gitoxide
make r.gitoxide
Testing in QEMU
Now we need to build a full Redox image and run it in QEMU. Let's make a configuration file.
cd ~/redox-gitoxide/redox/config/x86_64
cp desktop.toml my_desktop.toml
nano my_desktop.toml
Note that config file names starting with the prefix "my_" are gitignore'd, so it is preferred that you prefix your config name with "my_".
In `my_desktop.toml`, at the end of the list of packages, after `uutils = {}`, add:

gitoxide = {}
Now let's tell `make` about our new config definition, build the system, and test our new command.
cd ~/redox-gitoxide/redox
echo "CONFIG_NAME?=my_desktop.toml" >> .config
make rebuild
make qemu
Log in to Redox as user `user` with no password, and type `gix clone https://gitlab.redox-os.org/redox-os/website.git`.
We get some errors, but we are making progress.
Submitting the MRs
- Before committing our new recipe, we need to uncomment the `[source]` section. Edit `~/redox-gitoxide/redox/cookbook/recipes/gitoxide/recipe.toml` to remove the `#` from the start of the first two lines.
- We commit our changes to `cookbook` to include the new `gitoxide` recipe and submit an MR, following the instructions in Creating Proper Pull Requests.
- We commit our changes to `relibc`. We need to rebuild the system and test it thoroughly in QEMU, checking anything that might be affected by our changes. Once we are confident in our changes, we can submit the MR.
- We post links to both MRs on Redox OS/MRs to ensure they get reviewed.
- After making our changes to `libc` and testing them, we need to request that those changes be upstreamed by posting a message on Redox OS/MRs. If the changes are complex, please create an issue here and include a link to it in your post.
Developer FAQ
The website FAQ has questions/answers for newcomers and end-users, while this FAQ covers organizational/technical questions for developers and testers. Feel free to suggest new questions.
(If all else fails, join us on Chat)
- General Questions
- What is the correct way to update the build system?
- How can I verify if my build system is up-to-date?
- How can I test my changes on real hardware?
- How can I write a driver?
- How can I port a program?
- How can I debug?
- How can I insert files to the QEMU image?
- How can I change my build variant?
- How can I increase the filesystem size of my QEMU image?
- How can I change the CPU architecture of my build system?
- I only made a small change to my program. What's the quickest way to test it in QEMU?
- How can I install the packages needed by recipes without a new download of the build system?
- How can I use the packages from the CI server on my build system?
- How can I cross-compile to ARM from an x86-64 computer?
- How can I build the toolchain from source?
- Why does Redox have Assembly code?
- Why does Redox do cross-compilation?
- Does Redox support OpenGL/Vulkan?
- Porting Questions
- Scheme Questions
- What is a scheme?
- When does a regular program need to use a scheme?
- When would I write a program to implement a scheme?
- How do I use a scheme for sandboxing a program?
- How can I see all user-space schemes?
- How can I see all kernel schemes?
- What is the difference between kernel and user-space schemes?
- User-Space Questions
- Kernel Questions
- GitLab Questions
- I have a project/program with breaking changes but my merge request was not accepted, can I maintain the project in a separate repository on the Redox GitLab?
- I have a merge request with many commits, should I squash them after merge?
- Should I delete my branch after merge?
- How can I have an anonymous account?
- Troubleshooting Questions
General Questions
What is the correct way to update the build system?
- Read this page.
How can I verify if my build system is up-to-date?
- After the `make pull` command, run `git rev-parse HEAD` in the build system folders to see if they match the latest commit hashes on GitLab.
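As a self-contained illustration of what `git rev-parse HEAD` reports (using a throwaway repository rather than the real build system folders):

```shell
# Create a throwaway repository with one commit
git init -q demo-repo
cd demo-repo
git -c user.name=Demo -c user.email=demo@example.org commit -q --allow-empty -m "init"

# Prints the 40-character hash of the current commit; comparing this
# against the latest commit on GitLab tells you whether you're up-to-date
git rev-parse HEAD
```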
How can I test my changes on real hardware?
- Make sure your build system is up-to-date and use the `make live` command to create a bootable image with your changes.
How can I write a driver?
- Read this README.
How can I port a program?
- Read this page.
How can I debug?
- Read this section.
How can I insert files to the QEMU image?
- If you use a recipe, your changes will persist after a `make image`, but you can also mount the Redox filesystem.
How can I change my build variant?
- Add the `CONFIG_NAME?=your-config-name` environment variable to your `.config` file; read this section for more details.
How can I increase the filesystem size of my QEMU image?
- Change the `filesystem_size` field of your build configuration (`config/ARCH/your-config.toml`) and run `make image`; read this section for more details.
How can I change the CPU architecture of my build system?
- Add the `ARCH?=your-arch-code` environment variable to your `.config` file and run `make all`; read this section for more details.
I only made a small change to my program. What's the quickest way to test it in QEMU?
- If you already added the program recipe to your Cookbook configuration file, run:
make r.recipe-name image qemu
How can I install the packages needed by recipes without a new download of the build system?
- Download the `bootstrap.sh` script and run:
./bootstrap.sh -d
How can I use the packages from the CI server on my build system?
- Go to your Cookbook configuration and add the binary variant of the recipe.
nano config/your-cpu-arch/your-config.toml
[packages]
...
recipe-name = "binary"
...
- Run `make rebuild` to download/install the package.
How can I cross-compile to ARM from an x86-64 computer?
- Add the `ARCH?=aarch64` environment variable to your `.config` file and run `make all`.
How can I build the toolchain from source?
- Disable the `PREFIX_BINARY` environment variable inside your `.config` file:
nano .config
PREFIX_BINARY?=0
- Wipe the old toolchain binaries and build a new one.
rm -rf prefix
make prefix
- Wipe the old recipe binaries and build again with the new toolchain.
make clean all
Why does Redox have Assembly code?
Assembly is the core of low-level programming; it's a CPU-specific language that deals with things that aren't possible or feasible in high-level languages like Rust. It is sometimes required, or preferred, for accessing hardware or for carefully optimized hot spots.
Reasons to use Assembly instead of Rust:
- Dealing with low-level things (those that can't be handled by Rust)
- Writing constant time algorithms for cryptography
- Optimizations
Places where Assembly is used:
- `kernel` - interrupt and system call entry routines, context switching, special CPU instructions and registers.
- `drivers` - port IO needs special instructions (x86_64).
- `relibc` - some parts of the C runtime.
Why does Redox do cross-compilation?
As Redox is not ready for development or daily usage yet, programs need to be built outside of Redox and installed in the image.

Cross-compilation also reduces the portability requirements of the program, because the build tools only need to work on Linux/BSD, not on Redox.
Does Redox support OpenGL/Vulkan?
- Read this section.
Porting Questions
What is a recipe?
- A recipe is a software port for Redox; it cross-compiles by default if you use the Cookbook templates.
How do I determine the dependencies of a program?
- Read this section.
How can I configure the build system of the recipe?
- Read this category.
How can I search for functions on relibc?
- Read this section.
What are the upstream requirements for accepting my recipe?
- Read this.
Scheme Questions
What is a scheme?
- Read this page.
When does a regular program need to use a scheme?
- Most schemes are used internally by the system or by relibc; you don't need to access them directly. One exception is the pseudoterminal for your command window, which is accessed using the value of `$TTY`, which might have a value of e.g. "pty:18". Some low-level graphics programming might require you to access your display, which might have a value of e.g. "display:3".
When would I write a program to implement a scheme?
- If you are implementing a service daemon or a device driver, you will need to implement a scheme.
How do I use a scheme for sandboxing a program?
- The contain program provides a partial implementation of sandboxing using schemes and namespaces.
How can I see all user-space schemes?
- Read this section.
How can I see all kernel schemes?
- Read this section.
What is the difference between kernel and user-space schemes?
- Read this section.
User-Space Questions
How does a user-space daemon provide file-like services?
- When a regular program calls `open`, `read`, `write`, etc. on a file-like resource, the kernel translates that into a message of type `syscall::data::Packet` describing the file operation, and makes it available for reading on the appropriate daemon's scheme file descriptor. See this section for more information.
Kernel Questions
Which CPU architectures does the kernel support?
- i686 with limitations
- x86_64
- ARM64 with limitations
How are system calls used by user-space daemons?
- All user-space daemons use the system calls through relibc like any normal program.
GitLab Questions
I have a project/program with breaking changes but my merge request was not accepted, can I maintain the project in a separate repository on the Redox GitLab?
- Yes.
I have a merge request with many commits, should I squash them after merge?
- Yes.
Should I delete my branch after merge?
- Yes.
How can I have an anonymous account?
- During the account creation process, add a placeholder name in the "First Name" and "Last Name" fields and change it after your account is approved (a single name field is supported).
Troubleshooting Questions
Scripts
I can't download the bootstrap scripts, how can I fix this?
- Verify that you have `curl` installed, or download the script from your browser.
I tried to run the bootstrap.sh and podman_bootstrap.sh scripts but got an error, how do I fix this?
- Verify that the GNU Bash shell is installed on your system.
Build System
I called "make all" but it shows a "rustup can't be found" message, how can I fix this?
- Run the `source ~/.cargo/env` command (if you installed Rustup before the first `bootstrap.sh` run, this error doesn't happen).
I tried all troubleshooting methods but my build system is still broken, how can I fix that?
- If `make clean pull all` doesn't work, run `bootstrap.sh` again to download a fresh build system, or install Pop!_OS, Ubuntu or Debian.
Recipes
I had a compilation error with a recipe, how can I fix that?
- Read this section.
I tried all methods of the "Troubleshooting the Build" page and my recipe doesn't build, what can I do?
- This happens because your system has an environment problem or missing packages; remove the recipe from your build configuration file to work around it. All recipes follow the syntax `recipe-name = {}` below the `[packages]` section; the configuration files are placed at `config/your-arch`.
When I run make r.recipe I get a syntax error, how can I fix that?
- Check whether your `recipe.toml` has a typo.
QEMU
How can I kill a frozen QEMU process after a kernel panic?
- Read this section.
Quick Workflow
This page describes the quickest testing/development workflow for people who want a unified list of things to do.

You need to fully understand the build system to use this workflow, as it doesn't give a detailed explanation of each command, to save time and space.
- Install Rust Nightly
- Update Rust
- Download a new build system copy
- Install the required packages for the build system
- Download and run the "bootstrap.sh" script
- Download and build the toolchain and recipes
- Update the build system and its submodules
- Update the toolchain and relibc
- Update recipes and the QEMU image
- Update everything
- Wipe the toolchain and build again
- Wipe the toolchain/recipe binaries and build them again
- Wipe all sources/binaries of the build system and download/build them again
- Use the "myfiles" recipe to insert your files on the QEMU image
- Comment out a recipe from the build configuration
- Create logs
- Enable a source-based toolchain
- Build the toolchain from source
- Download and build some Cookbook configuration for some CPU architecture
Install Rust Nightly
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain nightly
Use Case: Configure the host system without the `bootstrap.sh` script.
Update Rust
rustup update
Use Case: Try to fix Rust problems.
Download a new build system copy
git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
Use Case: Commonly used when breaking changes on upstream require a new build system copy.
Install the required packages for the build system
curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/bootstrap.sh -o bootstrap.sh
bash -e bootstrap.sh -d
Use Case: Install new build tools for recipes without a new download of the build system.
Download and run the "bootstrap.sh" script
curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/bootstrap.sh -o bootstrap.sh
bash -e bootstrap.sh
Use Case: Commonly used when breaking changes on upstream require a new build system copy.
Download and build the toolchain and recipes
cd redox
make all
Use Case: Create a new build system copy after a breaking change on upstream.
Update the build system and its submodules
make pull
Use Case: Keep the build system up-to-date.
Update the toolchain and relibc
touch relibc
make prefix
Use Case: Keep the toolchain up-to-date.
Update recipes and the QEMU image
make rebuild
Use Case: Keep the build system up-to-date.
Update everything
curl -sf https://gitlab.redox-os.org/redox-os/redox/raw/master/bootstrap.sh -o bootstrap.sh
bash -e bootstrap.sh -d
rustup update
make pull
touch relibc
make prefix
make rebuild
Use Case: Try to fix any problem caused by outdated programs, toolchain and build system sources.
Wipe the toolchain and build again
rm -rf prefix
make prefix
Use Case: Commonly used to fix problems.
Wipe the toolchain/recipe binaries and build them again
make clean all
Use Case: Commonly used to fix unknown problems or update the build system after breaking changes on upstream.
Wipe all sources/binaries of the build system and download/build them again
make distclean all
Use Case: Commonly used to fix unknown problems or update the build system after breaking changes.
Use the "myfiles" recipe to insert your files on the QEMU image
mkdir cookbook/recipes/other/myfiles/source
nano config/your-arch/your-config.toml
myfiles = {}
make myfiles image
Use Case: Quickly insert files on the QEMU image or keep files between rebuilds.
Comment out a recipe from the build configuration
nano config/your-cpu-arch/your-config.toml
#recipe-name = {}
Use Case: Mostly used if some default recipe is broken.
Create logs
make some-command 2>&1 | tee file-name.log
Use Case: Report errors.
Enable a source-based toolchain
echo "PREFIX_BINARY?=0" >> .config
make prefix
Use Case: Build the latest toolchain sources or fix toolchain errors.
Build the toolchain from source
make prefix PREFIX_BINARY=0
Use Case: Test the toolchain sources.
Download and build some Cookbook configuration for some CPU architecture
make all CONFIG_NAME=your-config ARCH=your-cpu-arch
Use Case: Quickly build Redox variants without manual intervention on configuration files.
Libraries and APIs
This page covers the libraries and APIs of Redox.
- Versions
- Interfaces
- Code Porting
- Compiling for Redox
Terms:
- API - The interface of the library source code (the programs use the API to obtain the library functions).
- ABI - The interface between the program binary and the system services (normally the system call interface).
Versions
The Redox crates follow Cargo's SemVer model for version numbers (except `redox_syscall`); you can read more about it below:
Redox
This section covers the versioning system of Redox and important components.
- Redox OS - `x.y.z`, where `x` is the ABI version, `y` is API updates with backward compatibility and `z` is fixes with backward compatibility.
- libredox - Currently it doesn't follow the SemVer model, but it will in the future.
- redox_syscall - `x.y.z`, where `x` is the ABI version (it will remain 0 for a while), `y` is API updates and `z` is fixes (no backward compatibility).
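As a hypothetical `Cargo.toml` fragment (crate names and version numbers are illustrative), this is what the SemVer rules above mean in practice: a caret requirement trusts the crate's backward-compatibility promise, while an unstable crate like `redox_syscall` can be pinned exactly:

```toml
[dependencies]
# Caret requirement: Cargo may update to any backward-compatible 1.y.z release
some-stable-crate = "1.2"

# Exact pin: no automatic updates, since the crate gives no
# backward-compatibility guarantee between releases
redox_syscall = "=0.2.0"
```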
Providing a Stable ABI
The implementation of a stable ABI is important to avoid frequent recompilation while an operating system is under heavy development, thus improving development speed.

A stable ABI typically reduces development speed for the ABI provider (because it needs to uphold backward compatibility), whereas it improves development speed for the ABI user. Because relibc will be smaller than the rest of Redox, this is a good tradeoff, and improves development speed in general.

It also offers backward compatibility for binaries compiled with old API versions.
Currently, only libredox will have a stable ABI; relibc will be unstable only as long as it's under heavy development, and redox_syscall will remain unstable even after the 1.0 version of Redox.

Our final goal is to keep the Redox ABI stable across all `1.x` versions; if an ABI break happens, the next versions will be `2.x`.

A program compiled with an old API version will continue to work with a new API version. In most cases, statically linked library updates or program updates will require recompilation, while in others a new ABI version will add performance and security improvements that make recompiling the program worthwhile.
Interfaces
Redox uses different mechanisms, compared to Linux, to implement system capabilities.
relibc
relibc is an implementation of the C Standard Library (libc) in Rust.
relibc knows ahead-of-time whether it's compiled for Linux or Redox. If the target is Redox, relibc calls functions in libredox; the goal is to organize platform-specific functionality into clean modules.
Dynamic linking support is still under development, so relibc is currently statically linked. Once it's working, programs will access relibc using dynamic linking, so the functions used by a program will be linked at runtime (executable launch).
This will allow Redox to evolve and improve relibc without requiring programs to be recompiled after each source code change in most cases; if the dynamic linker can't resolve the references of a program binary, a recompilation is required.
Since Redox and Linux executables look so similar and can accidentally be executed on the other platform, relibc checks at runtime that it's running on the platform it was compiled for.
(C/C++ programs and libraries will use this library)
libredox
libredox is a system library for Redox components and Rust programs/libraries; it allows Rust programs to limit their need for C-style APIs (the relibc API and ABI).
It's both a crate (calling the ABI functions) and an ABI. The ABI is provided by relibc, while the crate (library) is a wrapper above the libredox ABI.
(Redox components, Rust programs and libraries will use this library)
A migration from redox_syscall to libredox is in progress; you can follow the current status on this link.
You can see Rust crates using it on this link.
redox_syscall
redox_syscall is a system call wrapper with a Rust API for low-level components and libraries.
(redox_syscall should not be used directly by programs, use libredox instead)
Code Porting
Rust std crate
Most Rust programs include the std crate. In addition to implementing standard Rust abstractions, this crate provides a safe Rust interface to system functionality, which it invokes via an FFI to libc.
std has mechanisms to enable operating-system variants of certain parts of the library; the file sys/mod.rs selects the appropriate variant to include. Programs use the std:: prefix to call this crate.
To ensure portability of programs, Redox supports the Rust std crate. For Redox, std::sys refers to std::sys::unix.
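A minimal sketch of what this looks like from the program's side: a portable std call such as the one below is routed through the std::sys platform layer down to the C library (relibc on Redox). The helper function and file name are made up for illustration:

```rust
use std::env;
use std::fs;
use std::io::Write;

// Round-trips a string through a file. On a Unix-like target
// (including Redox), std::fs ultimately calls into the C library
// (relibc on Redox) through the std::sys platform layer.
fn write_and_read_back(name: &str, contents: &str) -> std::io::Result<String> {
    let path = env::temp_dir().join(name);
    let mut file = fs::File::create(&path)?;
    file.write_all(contents.as_bytes())?;
    fs::read_to_string(&path)
}

fn main() -> std::io::Result<()> {
    let text = write_and_read_back("redox_book_example.txt", "hello from std")?;
    println!("{}", text);
    Ok(())
}
```

The same source compiles unchanged for Linux and Redox; only the platform layer underneath differs.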
Redox-specific code can be found on this repository.
For most functionality, Redox uses #[cfg(unix)] and sys/unix. Some Redox-specific functionality is enabled by #[cfg(target_os = "redox")].
Compiling for Redox
The Redox toolchain automatically links programs with relibc in place of the libc you would find on Linux.
Porting Method
You can use #[cfg(unix)] and #[cfg(target_os = "redox")] to guard platform-specific code.
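A minimal sketch of such guards (the functions and values are hypothetical, for illustration only):

```rust
// The shared Unix path covers both Linux and Redox; the
// Redox-only path is compiled only when targeting Redox.

#[cfg(unix)]
fn config_dir() -> &'static str {
    // Compiled for any Unix-like target, including Redox.
    "/etc"
}

#[cfg(target_os = "redox")]
fn scheme_example() -> &'static str {
    // Compiled only when targeting Redox.
    "/scheme/file"
}

fn main() {
    println!("config dir: {}", config_dir());
}
```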
Contributing
Now that you are ready to contribute to Redox, have a look at our CONTRIBUTING for suggestions about where to contribute.
Please follow our guidelines for Using Redox GitLab and our Best Practices.
Best Practices and Guidelines
These are a set of best practices to keep in mind when making a contribution to Redox. As always, rules are made to be broken, but these rules in particular play a part in deciding whether to merge your contribution (or not). So do try to follow them.
Literate programming
Literate programming is an approach to programming where the source code serves equally as:
- The complete description of the program, that a computer can understand
- The program's manual for the human, that an average human can understand
Literate programs are written in such a way that humans can read them from front to back, and understand the entire purpose and operation of the program without preexisting knowledge about the programming language used, the architecture of the program's components, or the intended use of the program. As such, literate programs tend to have lots of clear and well-written comments. In extreme cases of literate programming, the lines of "code" intended for humans far outnumber the lines of code that actually get compiled!
Tools can be used to generate documentation for human use based on the original source code of a program. The rustdoc tool is a good example of such a tool. In particular, rustdoc uses comments with three slashes ///, with special sections like # Examples and code blocks bounded by three backticks. The code blocks can be used to write out examples or unit tests inside of comments. You can read more about rustdoc here.
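A minimal sketch of a rustdoc comment (the function is a made-up example):

```rust
/// Returns the larger of two integers.
///
/// # Examples
/// `max2(1, 2)` returns `2`.
fn max2(a: i32, b: i32) -> i32 {
    if a > b { a } else { b }
}

fn main() {
    println!("{}", max2(1, 2));
}
```

Running rustdoc over a file like this generates an HTML page where the /// comment becomes the function's documentation.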
Writing Documentation Correctly (TM)
Documentation for Redox appears in two places:
- In the source code
- On the website (the Redox Book and online API documentation)
Redox functions and modules should use rustdoc annotations where possible, as they can be used to generate online API documentation - this ensures uniform documentation between those two halves. In particular, this is more strictly required for public APIs; internal functions can generally eschew them (though having explanations for any code can still help newcomers understand the codebase). When in doubt, making code more literate is better, so long as it doesn't negatively affect the functionality. Run rustdoc against any added documentation of this type before submitting, to check for correctness, errors, or odd formatting.
Documentation for the Redox Book generally should not include API documentation directly, but rather cover higher-level overviews of the entire codebase, project, and community. It is better to have information in the Book than not to have it, so long as it is accurate, relevant, and well-written. When writing documentation for the Book, be sure to run mdbook
against any changes to test the results before submitting them.
Rust Style
Since Rust is a relatively small and new language compared to others like C, there's really only one standard. Just follow the official Rust standards for formatting, and maybe run rustfmt on your changes, until we set up the CI system to do it automatically.
Rusting Properly
Some general guidelines:
- Use std::mem::replace and std::mem::swap when you can.
- Use .into() and .to_owned() over .to_string().
- Prefer passing references to the data over owned data. (Don't take String, take &str. Don't take Vec<T>, take &[T].)
- Use generics, traits, and other abstractions Rust provides.
- Avoid using lossy conversions (for example: don't do my_u32 as u16 == my_u16, prefer my_u32 == my_u16 as u32).
- Prefer in-place construction (the box keyword) when doing heap allocations.
- Prefer platform-independently sized integers over pointer-sized integers (u32 over usize, for example).
- Follow the usual idioms of programming, such as "composition over inheritance", "let your program be divided in smaller pieces", and "resource acquisition is initialization".
- When unsafe is unnecessary, don't use it. Safe code that is 10 lines longer is better than more compact unsafe code!
- Be sure to mark parts that need work with TODO, FIXME, BUG, UNOPTIMIZED, REWRITEME, DOCME, and PRETTYFYME.
- Use the compiler hint attributes, such as #[inline], #[cold], etc. when it makes sense to do so.
- Try to banish unwrap() and expect() from your code in order to manage errors properly. Panicking must indicate a bug in the program (not an error you didn't want to manage). If you cannot recover from an error, print a nice error to stderr and exit. Check Rust's book about Error Handling.
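A small sketch illustrating a few of these guidelines (the code is hypothetical: it takes &str instead of String, and returns a Result instead of calling unwrap()):

```rust
use std::fmt;

// Hypothetical error type for illustration; real code might use a
// crate such as thiserror, but a plain enum works fine.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ParseError::Empty => write!(f, "input was empty"),
            ParseError::NotANumber => write!(f, "input was not a valid port number"),
        }
    }
}

// Takes `&str` rather than `String`, and returns a `Result`
// instead of panicking with `unwrap()`.
fn parse_port(input: &str) -> Result<u16, ParseError> {
    let trimmed = input.trim();
    if trimmed.is_empty() {
        return Err(ParseError::Empty);
    }
    trimmed.parse::<u16>().map_err(|_| ParseError::NotANumber)
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("port: {}", port),
        // Print a nice error to stderr instead of panicking.
        Err(e) => eprintln!("error: {}", e),
    }
}
```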
Avoiding Panics
Panics should be avoided in the kernel, and should only occur in drivers and other services when correct operation is not possible, in which case they should be a call to panic!().
Please also read the kernel README for kernel-specific suggestions.
Testing Practices
- It's always better to test boot (make qemu or make virtualbox) every time you make a change, because it is important to see how the OS boots and works after it compiles.
- Even though Rust is a safety-oriented language, something as unstable and low-level as an in-dev operating system will almost certainly have problems in many cases and may completely break on even the slightest critical change.
- Also, make sure you check how the unmodified version runs on your machine before making any changes. Otherwise, you won't have anything to compare to, and it will generally just lead to confusion. TLDR: Rebuild and test boot often.
Using Redox GitLab
The Redox project is hosted here: Redox GitLab. You can download or clone the Redox source from there. However, if you wish to contribute, you will need a Redox GitLab account.
This chapter provides an overview of Redox GitLab, how to get access, and how to use it as a Redox contributor.
Signing in to GitLab
Joining Redox GitLab
You don't need to join our GitLab to build Redox, but you will if you want to contribute. Obtaining a Redox account requires approval from a GitLab administrator, because of the high number of spam accounts (bots) that are created on this type of project. To join, first go to Redox GitLab and click the Sign In/Register button. Create your User ID and Password. Then, send a message to the GitLab Approvals room indicating your GitLab User ID and requesting that your account be approved. Please give a brief statement about what you intend to use the account for. This is mainly to ensure that you are a genuine user.
The approval of your GitLab account may take some minutes or hours; in the meantime, join us on Chat and let us know what you are working on.
Setting up 2FA
Your new GitLab account will not require 2-Factor Authentication at the beginning, but you will eventually be required to enable it. Some details and options are described below.
Using SSH for your Repo
When using git commands such as git push, git may ask you to provide a password. Because this happens frequently, you might wish to use SSH authentication, which will bypass the password step. Please follow the instructions for using SSH here. ED25519 is a good choice. Once SSH is set up, always use the SSH version of the URL for your origin and remote. e.g.
- HTTPS: git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
- SSH: git clone git@gitlab.redox-os.org:redox-os/redox.git --origin upstream --recursive
2FA Apps
Requirements Before Logging Into GitLab
Before logging-in, you'll need:
- your web browser open at Redox GitLab
- your phone
- your 2FA App installed on your phone.
- to add https://gitlab.redox-os.org/redox-os/ as a site in your 2FA App. Once the site is added and listed, underneath it you'll see 2 sets of 3 digits, 6 digits in all, e.g. 258 687. That's the 2FA verification code. It changes around every minute.
Available 2FA Apps for Android
On Android, you may use:
- Aegis Authenticator - F-Droid/Play Store
- Google Authenticator
Available 2FA Apps for iPhone
On iPhone iOS, you may use:
Logging-In With An Android Phone
Here are the steps:
- From your computer web browser, open the Redox GitLab
- Click the Sign In button
- Enter your username/email
- Enter your password
- Click the Submit button
- Finally you will be prompted for a 2FA verification code from your phone. Go to your Android phone, open Google/Aegis Authenticator, and find the gitlab redox site. Underneath it are 6 digits, looking something like 258 687; that's your 2FA code. Enter those 6 digits into the prompt on your computer. Click Verify. Done. You're logged into GitLab.
Logging-In With An iPhone
Here are the steps:
- From your computer web browser, open the Redox GitLab
- Click the Sign In button
- Enter your username/email
- Enter your password
- Click the Submit button
- Finally you will be prompted for a 2FA verification code from your phone. Go to your iPhone, open 2stable/Tofu Authenticator (or Settings->Passwords for the iOS Authenticator), and find the gitlab redox site. Underneath it are 6 digits, looking something like 258 687; that's your 2FA code. Enter those 6 digits into the prompt on your computer. Click Verify. Done. You're logged into GitLab.
Repository Structure
Redox GitLab consists of a large number of Projects and Subprojects. The difference between a Project and a Subproject is somewhat blurred in Redox GitLab, so we generally refer to the redox
project as the main project and the others as subprojects. On the Redox GitLab website, you will find the projects organized as a very large, flat alphabetical list. This is not indicative of the role or importance of the various projects.
The Redox Project
The redox
project is actually just the root of the build system. It does not contain any of the code that the final Redox image will include. It includes the Makefiles, configuration files, and a few scripts to simplify setup and building. The redox
project can be found here.
Doing a git clone
of redox.git
with --recursive
fetches the full build system, as described in the .gitmodules
file. The submodules are referred to using an SHA to identify what commit to use, so it's possible that your fetched subprojects do not have the latest from their master
branch. Once the latest SHA reference is merged into redox
, you can update to get the latest version of the subproject.
Packages and Recipes
The many packages that are assembled into the Redox image are built from the corresponding subprojects. The name of a Redox package almost always matches the name of its subproject, although this is not enforced.
The recipe for a Redox package contains the instructions to fetch and build the package, for its inclusion in the Redox image. The recipe is stored with the Cookbook, not with the package.
Cookbook
The cookbook
subproject contains the mechanism for building the Redox packages. It also contains the recipes. If a recipe is modified, it is updated in the cookbook
subproject. In order for the updated recipe to get included in your fetched cookbook, the redox
project needs to be updated with the new cookbook
SHA. Connect with us on Chat if a recipe is not getting updated.
Crates
Some subprojects are built as Crates, and included in Redox packages using Cargo's package management system. Updates to a crate subproject must be pushed to the crate repository in order for it to be included in your build.
Forks, Tarballs and Other Sources
Some recipes obtain source code from places other than Redox GitLab. The cookbook mechanism can pull in source from any git URL. It can also obtain source tarballs, as is frequently the case for non-Rust applications.
In some cases, the Redox GitLab has a fork of another repository, in order to add Redox-specific patches. Where possible, we try to push these changes upstream, but there are many reasons why this might not be feasible.
Personal Forks
When you are contributing to Redox, you are expected to make your changes in a Personal Fork of the relevant project, then create a Merge Request (MR) to have your changes pulled from your fork into the master. Note that your personal fork is required to have public visibility.
In some rare situations, e.g. for experimental features or projects with licensing that is not compatible with Redox, a recipe may pull in sources located in a personal repository. Before using one of these recipes, please check with us on Chat to understand why the project is set up this way, and do not commit a Redox config file containing such a recipe without permission.
Creating Proper Bug Reports
If you identify a problem with the system that has not been identified previously, please create a GitLab Issue. In general, we prefer that you are able to reproduce your problem with the latest build of the system.
-
Make sure the code you are seeing the issue with is up to date with
upstream/master
. This helps to weed out reports for bugs that have already been addressed. -
Search Redox Issues to see if a similar problem has been reported before. Then search outstanding merge requests to see if a fix is pending.
-
Make sure the issue is reproducible (trigger it several times). Try to identify the minimum number of steps to reproduce it. If the issue happens inconsistently, it may still be worth filing a bug report for it, but indicate approximately how often the bug occurs.
-
If it is a significant problem, join us on Chat and ask if it is a known problem, or if someone plans to address it in the short term.
-
Identify the recipe that is causing the issue. If a particular command is the source of the problem, look for a repository on Redox GitLab with the same name. Or, for certain programs such as
games
or command line utilities, you can search for the package containing the command withgrep -rnw COMMAND --include Cargo.toml
, whereCOMMAND
is the name of the command causing the problem. The location of theCargo.toml
file can help indicate which recipe contains the command. This is where you should expect to report the issue. -
If the problem involves multiple recipes, kernel interactions with other programs, or general build problems, then you should plan to log the issue against the
redox
repository. -
If the problem occurs during build, record the build log using
script
ortee
, e.g.make r.recipe-name 2>&1 | tee recipe-name.log
If the problem occurs while using the Redox command line, use script in combination with your Terminal window:
tee qemu.log
make qemu
- Wait for Redox to start, then in this window:
redox login: user
- Execute the commands to demonstrate the bug
- Terminate QEMU:
sudo shutdown
- If shutdown does not work (there are known bugs), then:
- Use the QEMU menu to quit
- Then exit the shell created by script:
exit
-
Join us in the chat.
-
Record build information like:
- The rust toolchain you used to build Redox
rustc -V
and/orrustup show
from your Redox project folder
- The commit hash of the code you used
git rev-parse HEAD
- The environment you are running Redox in (the "target")
qemu-system-x86_64 -version
or your actual hardware specs, if applicable
- The operating system you used to build Redox
uname -a
or an alternative format
- The rust toolchain you used to build Redox
-
Format your log in Markdown syntax when posting it to the chat, to avoid flooding the room; you can see how to do it here.
-
Make sure that your bug doesn't already have an issue on GitLab. Feel free to ask in the Redox Chat if you're uncertain as to whether your issue is new.
-
Create a GitLab issue following the template. Non-bug report issues may ignore this template.
-
Watch the issue and be available for questions.
Creating Proper Pull Requests
It's completely fine to just submit a small pull request without first making an issue, but if it's a big change that will require a lot of planning and reviewing, it's best you start with writing an issue first.
The steps given below are for the main Redox project repository - submodules and other projects may vary, though most of the approach is the same.
If you marked your MR as ready, don't add new commits, because it will complicate Jeremy's review, making him lose time by reading the text again.
If you need to add new commits, mark the MR as draft again.
Using Git in terminal
- In an appropriate directory, e.g.
~/tryredox
, clone the Redox repository to your computer using one of the following commands:
-
HTTPS:
git clone https://gitlab.redox-os.org/redox-os/redox.git --origin upstream --recursive
-
SSH:
git clone git@gitlab.redox-os.org:redox-os/redox.git --origin upstream --recursive
-
Use HTTPS if you don't know which one to use. (Recommended: learn about SSH if you don't want to have to log in every time you push/pull!)
-
If you used
bootstrap.sh
(see Building Redox), thegit clone
was done for you and you can skip this step.
-
Change to the newly created redox directory and rebase to ensure you're using the latest changes:
cd redox
git rebase upstream master
-
Log into Redox GitLab and fork the repository - look for the button in the upper right. You should then have a fork of the repository on GitLab and a local copy on your computer. The local copy should have two remotes, upstream and origin; upstream should be set to the main repository and origin should be your fork.
Add your fork to your list of git remotes with
-
HTTPS:
git remote add origin https://gitlab.redox-os.org/MY_USERNAME/redox.git
-
SSH:
git remote add origin git@gitlab.redox-os.org:MY_USERNAME/redox.git
-
Note: If you made an error in your
git remote
command, usegit remote remove origin
and try again.
-
Alternatively, if you already have a fork and copy of the repo, you can simply check to make sure you're up-to-date. Fetch the upstream, rebase with local commits, and update the submodules:
git fetch upstream master
git rebase upstream/master
git submodule update --recursive --init
Usually, when syncing your local copy with the master branch, you will want to rebase instead of merge, because merging creates duplicate commits that don't actually do anything when merged into the master branch.
-
Before you start to make changes, you will want to create a separate branch, and keep the
master
branch of your fork identical to the main repository, so that you can compare your changes with the main branch and test out a more stable build if you need to. Create a separate branch:git checkout -b MY_BRANCH
-
Make your changes and test them.
-
Commit:
git add . --all
git commit -m "COMMIT MESSAGE"
Commit messages should describe their changes in the present tense, e.g. "Add stuff to file.ext" instead of "added stuff to file.ext". Try to remove duplicate/merge commits from MRs as these clutter up history, and may make it hard to read.
Optionally run rustfmt on the files you changed and commit again if it did anything (check with
git diff
first). -
Test your changes with
make qemu
ormake virtualbox
. -
Pull from upstream:
git fetch upstream
git rebase upstream/master
- Note: try not to use
git pull
, it is equivalent to doinggit fetch upstream; git merge master upstream/master
.
-
Repeat step 10 to make sure the rebase still builds and starts.
-
Push your changes to your fork:
git push origin MY_BRANCH
-
On Redox GitLab, create a Merge Request, following the template. Describe your changes. Submit!
-
If your merge request is ready, send the link to the Redox Merge Requests room.
Using GitLab web interface
- Open the repository that you want and click on "Web IDE".
- Make your changes to the repository files and click on the "Source Control" button on the left side.
- Name your commits and apply them to specific branches (each new branch will be based on the current master branch; that way you don't need to create forks and keep them updated, just send the proper commits to the proper branches). It's recommended that each new branch contain a single change, which makes it easier to review and merge.
- After the new branch creation a pop-up window will appear suggesting to create a MR, if you are ready, click on the "Create MR" button.
- If you want to make more changes, finish them, return to the repository page and click on the "Branches" link.
- Each branch will have a "Create merge request" button; click it for each branch that you want to merge.
- Name your MR and create it (you can squash your commits after merge to avoid flooding the upstream with commits).
- If your merge request is ready, send the link to the Redox Merge Requests room.
- Remember that if you use forks in the GitLab web interface, you will need to update them manually (delete the fork and create a new one if the upstream repository receives commits from other contributors; if you don't do this, there's a chance that your already-merged commits will appear in your next MR).
GitLab Issues
GitLab issues are a somewhat formal way to communicate with fellow Redox devs, but better for problems that cannot be quickly resolved. Issues are a good way to discuss specific features in detail or file bug reports, but if you want a quick response, using the chat is probably better.
If you haven't joined the chat yet, you should (if at all interested in contributing)!
Please follow the Guidelines for your issues, if applicable. You will need a Redox GitLab account. See Signing in to GitLab.
Communication
There are several ways to communicate with the Redox team. Chat is the best way, but you can connect with us through some other channels as well.
Chat
The best way to communicate with the Redox team is on Matrix Chat. You can open the Redox Space and see the rooms that are available (these rooms are English-only, we don't accept other languages because we don't understand them).
Matrix has several different clients. Element is a commonly used choice.
We follow the Rust Code Of Conduct as rules of the chat rooms.
(You must join the "Join Requests" room and request an invite to the Redox space, only space members can see all rooms)
All rooms available on the Redox space:
- #redox-join:matrix.org - a room to be invited to Redox space.
- #redox-general:matrix.org - a room for Redox-related discussions (questions, suggestions, porting, etc).
- #redox-dev:matrix.org - a room for the development, here you can talk about anything development-related (code, proposals, achievements, styling, bugs, etc).
- #redox-support:matrix.org - a room for testing/building support (problems, errors, questions).
- #redox-mrs:matrix.org - a room to send all ready merge requests without conflicts (if you have a ready MR to merge, send there).
- #redox-gitlab:matrix.org - a room to send new GitLab accounts for approval.
- #redox-board:matrix.org - a room for the Board of Directors meetings.
- #redox-voip:matrix.org
- #redox-random:matrix.org - a room for off-topic.
(We recommend that you leave the "Join Requests" room after your entry on Redox space)