ELDK Release Build Environment
Author
This document was written by Wolfgang Denk (wd {at} denx {dot} de).
Introduction
In the past, only a small number of people have tried building the
ELDK from scratch, even though the required steps are pretty well
documented in the section "Rebuilding the ELDK from Scratch"
of the DULG (DENX U-Boot and Linux Guide).
The major complexity not mentioned in the DULG is the required build
host environment. So far, only two build environments are well
supported and in regular use:
- Red Hat Linux release 7.3 (Valhalla)
- Fedora Core release 5 (Bordeaux)
For maximum compatibility of the resulting ELDK tools even with older
Linux distributions, Red Hat Linux release 7.3 has been used for
production (release) builds.
But such a system is not as trivial to set up as it might seem. For
example, the old 2.4.20 Linux kernel used in RH 7.3 knows little
about S-ATA, PCIe, and many other features found on current
mainboards, so installation from the original media will fail on most
current boards.
But while compatible real hardware is becoming more and more
difficult to find, it has become increasingly easy to use simulated
hardware instead.
Here we present a solution to run the
ELDK build environment under any
somewhat recent Linux distribution using the QEMU emulator.
The way we use qemu here is pretty much standard, except for the
following measures, which are intended to improve usability and
performance:
- Instead of using plain files as qemu file system images, we use
dedicated disk partitions or LVM volumes. This should improve disk
I/O performance, but requires root permissions to start qemu.
- We use a special network startup script with a little proxy arp
trickery (kudos to Detlev Zundel for that!) that allows the virtual
machine to be seen transparently on the network like a real host;
a minimal sketch of such a script is shown in the Startup Scripts
section below. This way you can, for example, log in to the virtual
machine using ssh, or even use NFS within the virtual machine.
- We use "qemu-kvm" for maximum performance.

Note: this document assumes a Fedora 10 host environment. Other
Linux distributions most likely use slightly different package and
script names, but the general operation should be about the same
everywhere. If you are using another distribution, it would be highly
appreciated if you could help and fill in the instructions for your
specific system.
Glossary
The following text uses these abbreviations:
- VBH: virtual build host
Install the necessary tools
- Add support for the
"rpmfusion"
repositories:
$ sudo rpm -ihv http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm \
> http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm
- Install "qemu" and "qemu-kvm":
$ sudo yum -y install kvm qemu akmod-kqemu kmod-kqemu-`uname -r`
- For easy use, install some more helpers:
$ sudo yum -y install libvirt virt-viewer virt-manager virt-top
- Test - load the KVM modules:
$ sudo /etc/sysconfig/modules/kvm.modules
[This boils down to "modprobe kvm-intel" on Intel and to
"modprobe kvm-amd" on AMD processor based host systems.]
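To double-check that the KVM modules were actually loaded and that the
/dev/kvm device node exists, a quick check like the following can be
used (standard commands, shown here only as a convenience):
$ lsmod | grep '^kvm'
$ ls -l /dev/kvm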
Initial set up of the emulated RH-7.3 environment
This section describes how you can set up the root file system image
for the VBH starting from a tarball taken from an existing, real system.
Note: Normally you do not have to do this yourself;
a more convenient way using a pre-built image is presented
below.
- Create hard disk image:
-> dd if=/dev/zero of=root-rh-7.3.img bs=10240k count=1024
- Boot the RH-7.3 installation disk in Rescue Mode, partition the
disk, and format the root file system.
This is necessary because the old RH-7.3 tools cannot mount a file
system created with more recent versions of mkfs.
Create an 8 GiB root partition on /dev/hda1 and a 2 GiB swap space
on /dev/hda2:
-> qemu -m 768 -hda root-rh-7.3.img -cdrom /tmp/valhalla-i386-disc1.iso -boot d
...
boot: linux rescue
...
sh-2.05a# fdisk /dev/hda
...
=> n => p => 1 => 1 => +8192M
=> a => 1
=> n => p => 2 => ENTER => ENTER
=> t => 2 => 82
=> p
=> w
sh-2.05a# mke2fs -j /dev/hda1
sh-2.05a# mkswap /dev/hda2
- Boot Knoppix to have all tools (networking, ssh) available to
initialize the root file system from the backup tarball:
-> qemu -m 768 -hda root-rh-7.3.img -cdrom /tmp/KNOPPIX_V5.1.1CD-2007-01-04-EN.iso -boot d
...
$ sudo -i
# mkdir /tmp/mnt
# mount -o rw /dev/hda1 /tmp/mnt
# cd /tmp/mnt
# ssh wd@gemini 'gunzip </tmp/root.build.tar.gz' | tar xpf -
- Adjust file system content: remove obsolete entries from
/etc/fstab and /etc/lilo.conf; re-install lilo:
# chroot /tmp/mnt /bin/sh
# vi /etc/fstab
# vi /etc/lilo.conf
# mount /proc
# lilo
# umount /proc
- Unmount the disk image and reboot:
# cd /
# umount /tmp/mnt
- Boot RH-7.3 System:
-> qemu -m 768 -hda root-rh-7.3.img -boot c
The resulting, ready-to-use disk image has been placed on our FTP
server: see
ftp://ftp.denx.de/pub/eldk/build-env/root-rh-7.3.img.gz
Note: This root file system image uses the following user / password
combinations:
Setting up ELDK Build Environment
- Download the disk image for the root file system:
$ wget ftp://ftp.denx.de/pub/eldk/build-env/root-rh-7.3.img.gz
- Create a logical volume.
The following example assumes that the volume group "data" has sufficient
free space:
# lvcreate -L 96G -n eldk_build data
Logical volume "eldk_build" created
- Boot the build system, using the root file system image as
/dev/hda and the newly created logical volume as /dev/hdc:
# qemu-kvm -m 1024 -smp 2 \
-net nic,model=rtl8139 -net tap,script=ifup-build \
-hda eldk-build.img -hdc /dev/mapper/data-eldk_build
- Log in as user "root"
- Create a file system on the logical volume:
# mke2fs -j /dev/hdc
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
12582912 inodes, 25165824 blocks
1258291 blocks (5.00%) reserved for the super user
First data block=0
768 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
- Edit "/etc/fstab" and enable the mounting of "/dev/hdc":
Remove the comment and change this line:
#/dev/hdc /opt ext3 defaults 1 2
into
/dev/hdc /opt ext3 defaults 1 2
- Then mount the logical volume on top of the "/opt" directory
and make sure it gets exported over NFS:
# mount /opt
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda1 4.9G 3.3G 1.4G 70% /
none 505M 0 504M 0% /dev/shm
/dev/hdc 94G 33M 89G 1% /opt
# exportfs -rav
exporting 10.0.0.0/255.0.0.0:/tftpboot
exporting 10.0.0.0/255.0.0.0:/tmp
exporting 10.0.0.0/255.0.0.0:/opt
exporting 10.0.0.0/255.0.0.0:/
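These exports come from the "/etc/exports" file of the VBH; an entry
for "/opt" might look like the following (an assumption shown for
illustration only - adjust the network address and options to your
setup):
/opt 10.0.0.0/255.0.0.0(rw,no_root_squash,sync)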
- Create a working directory for the ELDK build steps.
We assign pretty open access rights here to make sharing of data easier:
# mkdir /opt/eldk
# chmod 01777 /opt/eldk
- Create a user so you can easily share data (for example over NFS)
between your machines and the VBH:
# useradd -c 'Your Full Name' yourid
- If needed, create a $HOME directory for the "builder" user,
or adjust "/etc/passwd" as desired:
# mkdir -p /opt/eldkbuild/home
# chown builder.builder /opt/eldkbuild/home
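One way to point the "builder" account at this directory without
editing "/etc/passwd" by hand is the standard usermod tool (shown
only as an alternative; the path is the example location from above):
# usermod -d /opt/eldkbuild/home builder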
- On your development host (the real one, not the VBH)
or any other machine on your network you should now be able to
access the VBH.
If you have the automounter up and running, you can now simply type:
$ cd /net/eldk-build/opt/eldk/
Alternatively, you can mount the VBH's file system explicitly over NFS:
$ sudo mount -t nfs eldk-build:/opt/eldk/ /mnt/eldk/
$ cd /mnt/eldk/
- Now clone the three ELDK git repositories. The reason for doing this from
your development host is that we don't have git installed in the old RH
7.3 based environment of the VBH:
$ git-clone git://git.denx.de/eldk/build.git build
$ git-clone git://git.denx.de/eldk/tarballs.git tarballs
$ git-clone git://git.denx.de/eldk/SRPMS.git SRPMS
- Log in on the target system as user "builder" (see the note above
for the password settings):
$ ssh builder@eldk-build
- Change into the "/opt/eldk/build" directory and start a build,
for example for the ARM architecture:
$ cd /opt/eldk/build
$ ./ELDK_BUILD -a arm 2>&1 | tee $(date "+arm-%Y-%m-%d.LOG")
## Build of arm-2009-01-05 starting at Mon Jan 5 22:02:16 MET 2009
## Copy build files
...
Then lean back and watchen the blinkenlights
Startup Scripts
We provide some simple scripts to make starting the VBH easier here:
Note: Please make sure to adapt the file system image or partition
names and the network parameters in the scripts to match your system
setup.
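As an illustration only - not the exact scripts provided - a minimal
startup wrapper and "ifup-build" network script along the lines
described above could look like this; the interface names, the VBH IP
address, and the disk image / volume names are assumptions that must
be adapted to your setup:

#!/bin/sh
# start-eldk-build: start the VBH with the RH-7.3 root image and the
# build volume (adapt image and volume names to your setup)
qemu-kvm -m 1024 -smp 2 \
	-net nic,model=rtl8139 -net tap,script=./ifup-build \
	-hda root-rh-7.3.img -hdc /dev/mapper/data-eldk_build

#!/bin/sh
# ifup-build: called by qemu with the name of the tap interface in $1.
# Publishes a proxy ARP entry so the VBH appears on the LAN like a
# real host. 10.0.0.99 and eth0 are assumptions - adapt them.
VBH_IP=10.0.0.99
/sbin/ifconfig $1 0.0.0.0 up
echo 1 > /proc/sys/net/ipv4/conf/$1/proxy_arp
echo 1 > /proc/sys/net/ipv4/ip_forward
/sbin/route add -host $VBH_IP dev $1
/sbin/arp -Ds $VBH_IP eth0 pub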