- Fix for kernels which have permissions 0200 (write-only) on gpio export device.
- Updates to systemd unit files.
- Update to README for (not so) new homepage (thanks to Martin Michlmayr).
- Add a configuration option in the examples to handle QNAP devices which lack a fan (Debian bug #712841, thanks to Martin Michlmayr for the patch and to Axel Sommerfeldt).
Get it from git or http://www.hellion.org.uk/qcontrol/releases/0.5.6/.
The Debian package will be uploaded shortly.
(Note: it appears I forgot to commit/push this when 0.5.5 actually happened, nearly 20 months ago, so this posting is somewhat tardy, apologies)
- Update list of supported devices (Martin Michlmayr, via Debian bug #788911).
- Update examples to handle varying gpio-keys node name.
- Improvements to option parsing and help messages (Arnaud, see also Debian bug #804767).
Get it from git or http://www.hellion.org.uk/qcontrol/releases/0.5.5/.
The Debian package will be uploaded shortly.
Recently I started getting SMART warnings from one of the disks in my home NAS (a QNAP TS-419P II armel/kirkwood device running Debian Jessie):
Device: /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F2QSV6 [SAT], Self-Test Log error count increased from 0 to 1
Meaning it was now time to switch out that disk from the RAID5 array.
Since every time this happens I have to go and look up again what to do, I've decided to write it down this time.
I configure SMART to refer to devices by-id (giving me their name and model number), so first I needed to figure out what the kernel was calling this device (although mdadm is happy with the by-id path, various other bits are not):
# readlink /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F2QSV6
../../sdd
Next I needed to mark the device as failed in the array:
# mdadm --detail /dev/md0
/dev/md0:
[...]
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
[...]
Number Major Minor RaidDevice State
5 8 48 0 active sync /dev/sdd
1 8 32 1 active sync /dev/sdc
6 8 16 2 active sync /dev/sdb
4 8 0 3 active sync /dev/sda
# mdadm --fail /dev/md0 /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F2QSV6
mdadm: set /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F2QSV6 faulty in /dev/md0
# mdadm --detail /dev/md0
/dev/md0:
[...]
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
[...]
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 32 1 active sync /dev/sdc
6 8 16 2 active sync /dev/sdb
4 8 0 3 active sync /dev/sda
5 8 48 - faulty /dev/sdd
If it had been the RAID subsystem rather than SMART monitoring which had first spotted the issue then this would have happened already (and I would have received a different mail from the RAID checks instead of SMART).
Once the disk is marked as failed then actually remove it from the array:
# mdadm --remove /dev/md0 /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F2QSV6
mdadm: hot removed /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F2QSV6 from /dev/md0
And finally tell the kernel to delete the device:
# echo 1 > /sys/block/sdd/device/delete
At this point I could physically swap the disks.
While doing so I noticed there were some interesting messages in dmesg, either from the echo to the delete node in sysfs or from the physical swap of the disks:
[1779238.656459] md: unbind<sdd>
[1779238.659455] md: export_rdev(sdd)
[1779258.686720] sd 3:0:0:0: [sdd] Synchronizing SCSI cache
[1779258.700507] sd 3:0:0:0: [sdd] Stopping disk
[1779259.377589] ata4.00: disabled
[1779371.126202] ata4: exception Emask 0x10 SAct 0x0 SErr 0x180000 action 0x6 frozen
[1779371.133740] ata4: edma_err_cause=00000020 pp_flags=00000000, SError=00180000
[1779371.141003] ata4: SError: { 10B8B Dispar }
[1779371.145309] ata4: hard resetting link
[1779371.468708] ata4: SATA link down (SStatus 0 SControl 300)
[1779371.474340] ata4: EH complete
[1779557.416735] ata4: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
[1779557.424356] ata4: edma_err_cause=00000010 pp_flags=00000000, dev connect
[1779557.431264] ata4: SError: { PHYRdyChg DevExch }
[1779557.436008] ata4: hard resetting link
[1779563.357089] ata4: link is slow to respond, please be patient (ready=0)
[1779567.449096] ata4: SRST failed (errno=-16)
[1779567.453316] ata4: hard resetting link
I wonder if I should have used another method to detach the disk, perhaps poking the controller rather than the disk (which rang a vague bell in my memory from the last time this happened), but in the end the disk was broken and the kernel seems to have coped, so I'm not too worried about it.
It looked like the new disk had already been recognised:
[1779572.593471] scsi 3:0:0:0: Direct-Access ATA HGST HDN724040AL A5E0 PQ: 0 ANSI: 5
[1779572.604187] sd 3:0:0:0: [sdd] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[1779572.612171] sd 3:0:0:0: [sdd] 4096-byte physical blocks
[1779572.618252] sd 3:0:0:0: Attached scsi generic sg3 type 0
[1779572.626754] sd 3:0:0:0: [sdd] Write Protect is off
[1779572.631771] sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[1779572.632588] sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[1779572.665609] sdd: unknown partition table
[1779572.671522] sd 3:0:0:0: [sdd] Attached SCSI disk
[1779855.362331] sdd: unknown partition table
So I skipped trying to figure out how to perform a SCSI rescan and went straight to identifying that the new disk was called:
/dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1338P4GY8ENB
and then tried to run a SMART conveyance self-test with:
# smartctl -t conveyance /dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1338P4GY8ENB
But this particular drive does not seem to support that, so I went straight to editing /etc/smartd.conf to replace the old disk with the new one and:
# service smartmontools reload
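For the record, the smartd.conf entry for the new disk ended up looking something like the following (the monitoring options here are illustrative rather than an exact copy of my config):
/dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1338P4GY8ENB -a -d sat -m root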
With all that I was ready to add the new disk to the array:
# mdadm --add /dev/md0 /dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1338P4GY8ENB
mdadm: added /dev/disk/by-id/ata-HGST_HDN724040ALE640_PK1338P4GY8ENB
# mdadm --detail /dev/md0
/dev/md0:
[...]
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
[...]
Rebuild Status : 0% complete
[...]
Number Major Minor RaidDevice State
5 8 48 0 spare rebuilding /dev/sdd
1 8 32 1 active sync /dev/sdc
6 8 16 2 active sync /dev/sdb
4 8 0 3 active sync /dev/sda
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[5] sda[4] sdb[6] sdc[1]
5860538880 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
[>....................] recovery = 0.0% (364032/1953512960) finish=1162.4min speed=28002K/sec
So now all that was left was to wait about 20 hours (with fingers crossed that a second disk didn't die in the meantime; spoiler: it didn't).
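While waiting, the rebuild can be kept an eye on with something like the usual trick (nothing specific to this setup):
$ watch -n 60 cat /proc/mdstat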
I am currently attending the mini-Debconf being held in space generously provided by ARM's offices in Cambridge, UK. Thanks to ARM and the other sponsors for making this possible.
Yesterday I made a pass through the bug list for the Xen packages. According to the replies I have received from the BTS I looked at and acted on:
- #797205: Tagged to reflect that I had previously forwarded upstream.
- #753358: Update the found versions and marked as an upstream issue.
- #798510: Investigated a bit and asked some followup questions to the submitter.
- #799122: Asked for some clarifications from the submitter and updated the found versions. Will likely followup on this one some more today.
- #745419: Sent a fix to upstream.
- #784011, #770230, #776319: Various CVEs closed as fixed by 4.5.1~rc1-1
- #795721, #784880: Various CVEs closed as fixed by 4.6.0-1 (currently in NEW).
- #793132, #785187: Regular bugs fixed by 4.6.0-1.
- #716496: Closed (wontfix as far as Debian is concerned)
- #665433: Closed, apparently unreproducible crash in the Squeeze version.
- #439156, #441539, #399073: Closed bugs against some truly ancient versions of Xen (Etch or Lenny?). These got lost when newer versions of Xen were uploaded since the packages have the Xen major.minor versions in the name. I previously opened #796370 to add a reportbug script to cause such bugs to be filed against src:xen in the future and prevent this happening. I hope to see that patch in a future version of the package.
- #776742: Tagged as an upstream issue.
- #491793: Marked as blocked by #481542.
Phew! Today I expect more of the same, starting with seeing where the new information in #799122 takes me.
Since gitorious' shutdown I decided it was time to start hosting my own git repositories for my own little projects (although the company which took over gitorious has a Free software offering, it seems that their hosted offering is based on the proprietary version; and in any case, once bitten, twice shy and all that).
After a bit of investigation I settled on using gitolite and gitweb. I did consider (and even had a vague preference for) cgit but it wasn't available in Wheezy (not even in backports, and the backport looked tricky) and I haven't upgraded my VPS yet. I may reconsider cgit once I switch to Jessie.
The only wrinkle was that my VPS is shared with a friend and I didn't want to completely take over the gitolite and gitweb namespaces in case he ever wanted to set up git.hisdomain.com, so I needed something which was at least somewhat compatible with vhosting. gitolite doesn't appear to support such things out of the box, but I found an interesting/useful post from Julius Plenz which included sufficient inspiration that I thought I knew what to do.
After a bit of trial and error here is what I ended up with:
Install gitolite
The gitolite website has plenty of documentation on configuring gitolite, but since gitolite is in Debian it's even more trivial than the quick install makes out.
I decided to use the newer gitolite3 package from wheezy-backports instead of the gitolite (v2) package from Wheezy. I already had backports enabled so this was just:
# apt-get install gitolite3/wheezy-backports
I accepted the defaults and gave it the public half of the ssh key which I had created to be used as the gitolite admin key.
By default this added a user gitolite3 with a home directory of /var/lib/gitolite3. Since the username forms part of the URL used to access the repositories I didn't want it to include the 3, so I edited /etc/passwd, /etc/group, /etc/shadow and /etc/gshadow to say just gitolite but left the home directory as gitolite3.
Now I could clone the gitolite-admin repo and begin to configure things.
Add my user
This was as simple as dropping the public half of the key into the gitolite-admin repo as keydir/ijc.pub, then git add, commit and push.
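In other words, roughly the following from a checkout of gitolite-admin (the source path of the key and the commit message are just illustrative):
$ cp ~/ijc-gitolite.pub keydir/ijc.pub
$ git add keydir/ijc.pub
$ git commit -m "Add ijc's key"
$ git push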
Setup vhosting
Between the gitolite docs and Julius' blog post I had a pretty good idea what I wanted to do here.
I wasn't too worried about making the vhost transparent from the developer's (ssh:// URL) point of view, just from the gitweb and git clone side. So I decided to adapt things to use a simple $VHOST/$REPO.git schema.
I created /var/lib/gitolite3/local/lib/Gitolite/Triggers/VHost.pm containing:
package Gitolite::Triggers::VHost;
use strict;
use warnings;
use File::Slurp qw(read_file write_file);
sub post_compile {
    my %vhost = ();
    my @projlist = read_file("$ENV{HOME}/projects.list");
    for my $proj (sort @projlist) {
        $proj =~ m,^([^/\.]*\.[^/]*)/(.*)$, or next;
        my ($host, $repo) = ($1,$2);
        $vhost{$host} //= [];
        push @{$vhost{$host}} => $repo;
    }
    for my $v (keys %vhost) {
        write_file("$ENV{HOME}/projects.$v.list",
                   { atomic => 1 }, join("\n",@{$vhost{$v}}));
    }
}
1;
I then edited /var/lib/gitolite3/.gitolite.rc and ensured it contained:
LOCAL_CODE => "$ENV{HOME}/local",
POST_COMPILE => [ 'VHost::post_compile', ],
(The first I had to uncomment, the second to add).
All this trigger does is take the global projects.list, in which gitolite will list any repo which is configured to be accessible via gitweb, and split it into several vhost specific lists.
Create first repository
Now that the basics were in place I could create my first repository (for hosting qcontrol).
In the gitolite-admin repository I edited conf/gitolite.conf and added:
repo hellion.org.uk/qcontrol
RW+ = ijc
After adding, committing and pushing I now have /var/lib/gitolite3/projects.list containing:
hellion.org.uk/qcontrol.git
testing.git
(the testing.git repository is configured by default) and /var/lib/gitolite3/projects.hellion.org.uk.list containing just:
qcontrol.git
For cloning the URL is:
gitolite@${VPSNAME}:hellion.org.uk/qcontrol.git
which is rather verbose (${VPSNAME} is quite long in my case too), so to simplify things I added to my .ssh/config:
Host gitolite
Hostname ${VPSNAME}
User gitolite
IdentityFile ~/.ssh/id_rsa_gitolite
so I can instead use:
gitolite:hellion.org.uk/qcontrol.git
which is a bit less of a mouthful and almost readable.
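So cloning is now simply, for example:
$ git clone gitolite:hellion.org.uk/qcontrol.git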
Configure gitweb (http:// URL browsing)
Following the documentation's advice I edited /var/lib/gitolite3/.gitolite.rc to set:
UMASK => 0027,
and then:
$ chmod -R g+rX /var/lib/gitolite3/repositories/*
Which arranges that members of the gitolite group can read anything under /var/lib/gitolite3/repositories/*.
Then:
# adduser www-data gitolite
This adds the user www-data to the gitolite group so it can take advantage of those relaxed permissions. I'm not super happy about this, but since gitweb runs as www-data:www-data this seems to be the recommended way of doing things. I'm consoling myself with the fact that I don't plan on hosting anything sensitive... I also arranged things such that members of the group can only list the contents of directories from the vhost directory down, by setting g=x not g=rx on higher level directories. Potentially sensitive files do not have group permissions at all either.
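In other words something along these lines (the exact set of higher level directories here is illustrative):
# chmod g=x /var/lib/gitolite3 /var/lib/gitolite3/repositories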
Next I created /etc/apache2/gitolite-gitweb.conf:
die unless $ENV{GIT_PROJECT_ROOT};
$ENV{GIT_PROJECT_ROOT} =~ m,^.*/([^/]+)$,;
our $gitolite_vhost = $1;
our $projectroot = $ENV{GIT_PROJECT_ROOT};
our $projects_list = "/var/lib/gitolite3/projects.${gitolite_vhost}.list";
our @git_base_url_list = ("http://git.${gitolite_vhost}");
This extracts the vhost name from ${GIT_PROJECT_ROOT} (it must be the last element) and uses it to select the appropriate vhost specific projects.list.
Then I added a new vhost to my apache2 configuration:
<VirtualHost 212.110.190.137:80 [2001:41c8:1:628a::89]:80>
ServerName git.hellion.org.uk
SetEnv GIT_PROJECT_ROOT /var/lib/gitolite3/repositories/hellion.org.uk
SetEnv GITWEB_CONFIG /etc/apache2/gitolite-gitweb.conf
Alias /static /usr/share/gitweb/static
ScriptAlias / /usr/share/gitweb/gitweb.cgi/
</VirtualHost>
This configures git.hellion.org.uk (don't forget to update DNS too) and sets the appropriate environment variables to find the custom gitolite-gitweb.conf and the project root.
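Assuming the vhost lives in its own file under /etc/apache2/sites-available/ (the filename here is illustrative) it then just needs enabling in the usual way:
# a2ensite git.hellion.org.uk
# service apache2 reload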
Next I edited /var/lib/gitolite3/.gitolite.rc again to set:
GIT_CONFIG_KEYS => 'gitweb\.(owner|description|category)',
Now I can edit the repo configuration to be:
repo hellion.org.uk/qcontrol
owner = Ian Campbell
desc = qcontrol
RW+ = ijc
R = gitweb
That R permission for the gitweb pseudo-user causes the repo to be listed in the global projects.list and the trigger which we've added causes it to be listed in projects.hellion.org.uk.list, which is where our custom gitolite-gitweb.conf will look.
Setting GIT_CONFIG_KEYS allows those options (owner and desc are syntactic sugar for two of them) to be set here and propagated to the actual repo.
Configure git-http-backend (http:// URL cloning)
After all that this was pretty simple. I just added this to my vhost before the ScriptAlias / /usr/share/gitweb/gitweb.cgi/ line:
ScriptAliasMatch \
"(?x)^/(.*/(HEAD | \
info/refs | \
objects/(info/[^/]+ | \
[0-9a-f]{2}/[0-9a-f]{38} | \
pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
git-(upload|receive)-pack))$" \
/usr/lib/git-core/git-http-backend/$1
This (which I stole straight from the git-http-backend(1) manpage) causes anything which git-http-backend should deal with to be sent there and everything else to be sent to gitweb.
Having done that access is enabled by editing the repo configuration one last time to be:
repo hellion.org.uk/qcontrol
owner = Ian Campbell
desc = qcontrol
RW+ = ijc
R = gitweb daemon
Adding R permissions for daemon causes gitolite to drop a stamp file in the repository which tells git-http-backend that it should export it.
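With that in place an anonymous clone over HTTP should be just, for example:
$ git clone http://git.hellion.org.uk/qcontrol.git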
Configure git daemon (git:// URL cloning)
I actually didn't bother with this: git http-backend supports the smart HTTP mode which should be as efficient as the git protocol. Given that, I couldn't see any reason to run another network facing daemon on my VPS.
FWIW it looks like vhosting could have been achieved by using the --interpolated-path option.
Conclusion
There are quite a few moving parts, but they all seem to fit together quite nicely. In the end, apart from adding www-data to the gitolite group, I'm pretty happy with how things ended up.
Since gitorious has now shutdown I've (finally!) moved the qcontrol homepage to: http://www.hellion.org.uk/qcontrol.
Source can now be found at http://git.hellion.org.uk/qcontrol.git.
I recently wrote a blog post on using grub 2 as a Xen PV bootloader for work. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.
Rather than repeat the whole thing here I'll just briefly cover the stuff which is of interest for Debian users (if you want the full background and the stuff on building grub from source etc then see the original post).
TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests, then in your guest configuration, depending on whether you want a 32- or 64-bit PV guest, write either:
kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"
or
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
(instead of bootloader = ... or any other kernel = ...; also omit ramdisk = ... and any command line related stuff such as root = ..., extra = ... or cmdline = ...) and your guests will boot using Grub 2, much like on native.
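So a minimal 64-bit guest configuration ends up looking something like this (the name and disk path are just placeholders):
name = "jessie-guest"
disk = ["phy:/dev/LVM/jessie,xvda,rw"]
vif = [ '' ]
memory = 512
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"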
In slightly more detail:
The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).
The package grub-xen-host contains grub binaries configured for the host; these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian or can be installed by hand.
The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates this into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integration with the kernel packages (i.e. running update-grub at the right times), so it is grub-xen which should be installed in Debian guests.
At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).
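Inside the guest that is just something like:
# apt-get install grub-xen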
After becoming a DM at Debconf12 in Managua, Nicaragua and entering the NM queue during Debconf13 in Vaumarcus, Switzerland I received the mail about 24 hours too late to officially become a DD during Debconf14 in Portland, USA. Nevertheless it was a very pleasant surprise to find the mail in my INBOX this morning confirming that my account had been created and that I was officially ijc@debian.org. Thanks to everyone who helped/encouraged me along the way!
I don't imagine much will change in practice; I intend to remain involved in the kernel and Debian Installer efforts as well as continuing to contribute to the Xen packaging and to maintain qcontrol (both in Debian and upstream) and sunxi-tools. I suppose I also still maintain ivtv-utils and xserver-xorg-video-ivtv but they require so little in the way of updates that I'm not sure they count.
It's taken a while but all of the pieces are finally in place to run successfully through Debian Installer on ARM64 using the Debian ARM64 port.
So I'm now running nightly builds locally and uploading them to http://www.hellion.org.uk/debian/didaily/arm64/.
If you have CACert in your CA roots then you might prefer the slightly more secure version.
Hopefully before too long I can arrange to have them building on one of the project machines and uploaded to somewhere a little more formal like people.d.o or even the regular Debian Installer dailies site. This will have to do for now though.
Warning
The arm64 port is currently hosted on Debian Ports which only supports the unstable "sid" distribution. This means that installation can be a bit of a moving target and sometimes fails to download various installer components or installation packages. Mostly it's just a case of waiting for the buildd and/or archive to catch up. You have been warned!
Installing in a Xen guest
If you are lucky enough to have access to some 64-bit ARM hardware (such as the APM X-Gene, see wiki.xen.org for setup instructions) then installing Debian as a guest is pretty straightforward.
I suppose if you had lots of time (and I do mean lots) you could also install under Xen running on the Foundation or Fast Model. I wouldn't recommend it though.
First download the installer kernel and ramdisk onto your dom0 filesystem (e.g. to /root/didaily/arm64).
Second create a suitable guest config file such as:
name = "debian-installer"
disk = ["phy:/dev/LVM/debian,xvda,rw"]
vif = [ '' ]
memory = 512
kernel = "/root/didaily/arm64/vmlinuz"
ramdisk= "/root/didaily/arm64/initrd.gz"
extra = "console=hvc0 -- "
In this example I'm installing to a raw logical volume /dev/LVM/debian. You might also want to use randmac to generate a permanent MAC address for the Ethernet device (specified as vif = ['mac=xx:xx:xx:xx:xx:xx']).
Once that is done you can start the guest with:
xl create -c cfg
From here you'll be in the installer and things carry on as usual. You'll need to manually point it to ftp.debian-ports.org as the mirror, or you can preseed by appending to the extra line in the cfg like so:
mirror/country=manual mirror/http/hostname=ftp.debian-ports.org mirror/http/directory=/debian
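i.e. the extra line in the cfg ends up something along these lines (I haven't copy-pasted this exact combination, but this is the shape of it):
extra = "console=hvc0 mirror/country=manual mirror/http/hostname=ftp.debian-ports.org mirror/http/directory=/debian -- "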
Apart from that there will be a warning about not knowing how to setup the bootloader but that is normal for now.
Installing in Qemu
To do this you will need a version of qemu (http://www.qemu.org) which supports qemu-system-aarch64. The latest release doesn't yet so I've been using v2.1.0-rc3 (it seems upstream are now up to -rc5). Once qemu is built and installed and the installer kernel and ramdisk have been downloaded to $DI you can start with:
qemu-system-aarch64 -M virt -cpu cortex-a57 \
-kernel $DI/vmlinuz -initrd $DI/initrd.gz \
-append "console=ttyAMA0 -- " \
-serial stdio -nographic --monitor none \
-drive file=rootfs.qcow2,if=none,id=blk,format=qcow2 -device virtio-blk-device,drive=blk \
-net user,vlan=0 -device virtio-net-device,vlan=0
That's using a qcow2 image for the rootfs, I think I created it with something like:
qemu-img create -f qcow2 rootfs.qcow2 4G
Once started, installation proceeds much like normal. As with Xen you will need to either point it at the debian-ports archive by hand or preseed by adding to the -append line, and the warning about no bootloader configuration is expected.
Installing on real hardware
Someone should probably try this ;-).
I've recently packaged the sunxi tools for Debian. These are a set of tools produced by the Linux Sunxi project for working with the Allwinner "sunxi" family of processors. See the package page for details. Thanks to Steve McIntyre for sponsoring the initial upload.
The most interesting components of the package are the tools for working with the Allwinner processors' FEL mode. This is a low-level processor mode, which can be entered on boot (usually by pressing a special 'FEL button' somewhere on the device), implementing a simple USB protocol that allows for initial programming and recovery of the device. It is thanks to FEL mode that most sunxi based devices are pretty much unbrickable.
The most common use of FEL is to boot over USB. In the Debian package the fel and usb-boot tools are named sunxi-fel and sunxi-usb-boot respectively, but otherwise can be used in the normal way described on the sunxi wiki pages.
One enhancement I made to the Debian version of usb-boot is to integrate with the u-boot packages to allow you to easily FEL boot any sunxi platform supported by the Debian packaged version of u-boot (currently only Cubietruck, more to come I hope). To make this work we take advantage of Multiarch to install the armhf version of u-boot (unless your host is already armhf of course, in which case just install the u-boot package):
# dpkg --add-architecture armhf
# apt-get update
# apt-get install u-boot:armhf
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
u-boot:armhf
0 upgraded, 1 newly installed, 0 to remove and 1960 not upgraded.
Need to get 0 B/546 kB of archives.
After this operation, 8,676 kB of additional disk space will be used.
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Selecting previously unselected package u-boot:armhf.
(Reading database ... 309234 files and directories currently installed.)
Preparing to unpack .../u-boot_2014.04+dfsg1-1_armhf.deb ...
Unpacking u-boot:armhf (2014.04+dfsg1-1) ...
Setting up u-boot:armhf (2014.04+dfsg1-1) ...
With that done FEL booting a cubietruck is as simple as starting the board in FEL mode (by holding down the FEL button when powering on) and then:
# sunxi-usb-boot Cubietruck -
fel write 0x2000 /usr/lib/u-boot/Cubietruck_FEL/u-boot-spl.bin
fel exe 0x2000
fel write 0x4a000000 /usr/lib/u-boot/Cubietruck_FEL/u-boot.bin
fel write 0x41000000 /usr/share/sunxi-tools//ramboot.scr
fel exe 0x4a000000
Which should result in something like this on the Cubietruck's serial console:
U-Boot SPL 2014.04 (Jun 16 2014 - 05:31:24)
DRAM: 2048 MiB
U-Boot 2014.04 (Jun 16 2014 - 05:30:47) Allwinner Technology
CPU: Allwinner A20 (SUN7I)
DRAM: 2 GiB
MMC: SUNXI SD/MMC: 0
In: serial
Out: serial
Err: serial
SCSI: SUNXI SCSI INIT
Target spinup took 0 ms.
AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl SATA mode
flags: ncq stag pm led clo only pmp pio slum part ccc apst
Net: dwmac.1c50000
Hit any key to stop autoboot: 0
sun7i#
As more platforms become supported by the u-boot packages you should be able to find them in /usr/lib/u-boot/*_FEL.
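e.g. at the moment that should just show the one platform (illustrative output):
$ ls -d /usr/lib/u-boot/*_FEL
/usr/lib/u-boot/Cubietruck_FEL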
There is one minor inconvenience, which is the need to run sunxi-usb-boot as root in order to access the FEL USB device. This is easily resolved by creating /etc/udev/rules.d/sunxi-fel.rules containing either:
SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", OWNER="myuser"
or
SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", GROUP="mygroup"
to enable access for myuser or mygroup respectively. Once you have created the rules file, reload the udev rules so it takes effect:
# udevadm control --reload-rules
As well as the FEL mode tools the packages also contain a FEX (de)compiler. FEX is Allwinner's own hardware description language and is used with their Android SDK kernels and the fork of that kernel maintained by the linux-sunxi project. Debian's kernels follow mainline and therefore use Device Tree.