We want to backport a package from testing to stable. For our example we will use the python-numpy package to illustrate the process.

Building a Debian package can sometimes feel like a battlefield, especially when the package in question has a number of dependencies. To isolate the work, I suggest using debootstrap.

We create an environment ::
apt-get install debootstrap
debootstrap --arch amd64 wheezy ~/dbs-builddeb http://ftp.fr.debian.org/debian/
We chroot into the debootstrap ::
chroot dbs-builddeb
We will need a few development tools, which we install ::
apt-get install devscripts build-essential dh-buildinfo
echo "export LANG=C" >> ~/.bashrc
We configure apt in /etc/apt/sources.list, as follows ::
## Wheezy
deb http://ftp.fr.debian.org/debian wheezy main
deb-src http://ftp.fr.debian.org/debian wheezy main
# wheezy-backports
deb http://ftp.fr.debian.org/debian wheezy-backports main contrib non-free
## Jessie
#deb http://ftp.fr.debian.org/debian jessie main
deb-src http://ftp.fr.debian.org/debian jessie main
We update everything ::
apt-get update
We fetch the sources ::
apt-get source python-numpy
We fetch and install the build dependencies ::
apt-get build-dep python-numpy
We build the source code ::
cd python-numpy-1.8.1
dch -i
python-numpy (1:1.8.1-1~etalabbpo70+1) unstable; urgency=low

  * Non-maintainer upload.
  * Backport to wheezy.

 -- Felix Defrance <felix.defrance@data.gouv.fr>  Thu, 10 Apr 2014 14:22:32 +0000
dpkg-buildpackage -tc
That's it! The forged package files can be found in the parent directory ::
python-numpy_1.8.1-1~etalabbpo70+1.debian.tar.gz
python-numpy_1.8.1-1~etalabbpo70+1_amd64.deb
python-numpy_1.8.1-1~etalabbpo70+1.dsc
python-numpy_1.8.1-1~etalabbpo70+1_amd64.changes
For personal use a dpkg -i will be enough; otherwise you can add the package to a repository set up specifically for the occasion, for example.
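For the local route, installing the freshly built binary package looks like this (the filename is taken from the listing above; the second command pulls in any missing runtime dependencies) ::

dpkg -i python-numpy_1.8.1-1~etalabbpo70+1_amd64.deb
apt-get install -f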
Just like the schedule said, GNOME 3.12 was released today, and of course it's our best release ever; honest, you can really feel the maturity of the whole GNOME 3 experience. I've been quite busy with other projects in recent months and couldn't participate as much as I wanted, but I nevertheless have a few perspectives to share, and people to thank.
Foremost the release team: from that point of view the landing was particularly soft, with very few freeze break requests, which is a good sign. Hat tip to Matthias for the handling of .0, and for all the blog posts he has been writing detailing the changes.
For the French translation team, where my part is quite small — mostly I attended Le Translathon and provided a few screenshots for the release notes — this also looks like a nice release, especially as new participants joined the team.
Last but not least the documentation team really kicked ass this cycle.
These are just three teams, parts of a big project, so I couldn't end without thanking all the other teams and people, from developers to testers, from designers to users, from the foundation board to the engagement team; GNOME is the sum of us all.
Let's celebrate.
It's already the last day of the winter documentation hackfest in Norwich (pronounced like porridge); tomorrow we'll drive to Brussels for FOSDEM, and here comes a second report of my activities.
On Tuesday, after the work on git stable updates (see last post), I concentrated on various speed improvements, including a small change to our own local configuration that works wonders (it had a hack to use XSL files from a local yelp-xsl copy, but that broke some timestamping and caused some modules to be rebuilt endlessly). In normal operation a full build of help.gnome.org now takes about ten minutes.
Kat had made a request to have application icons displayed in the index pages, as they are now included in Mallard documentation titles. I started that on Wednesday and it went more easily than expected; the pages indeed look nicer now.
The other important part of Wednesday was a request from Petr, to get the getting started pages integrated on the web site. The particular thing about the gnome-getting-started-docs module is that it installs pages to an existing document (gnome-help), making use of Mallard generated indexes and links to provide an integrated document. Unfortunately that operation mode didn't go well with the code, as it handled tarballs one after the other and was rather confused when another document with the same name, but no index page, came in. It required quite a lot of changes, and I'm not happy about all of them as there's quite a bit of code duplication and some hardcoded parts, but at the end of the day it was working, and you can now go and view the Getting Started material on the web site.
For the last day I switched to the developer docs, and as I looked at Allan's notes and thought about a way forward, I went back to the code and discovered I had added the possibility to import documentation from wiki pages almost three years ago, during the 3.0 hackfest in Bangalore... It seemed like a good fit for the series of "How Do I" pages mostly created by Ryan and Matthias, so I refreshed the code and voilà! the pages got on the Guides page.
During the last year or so many elements were removed from the frontpage, first the platform grid, then the "10 minutes tutorial" carousel, but that left the page quite empty. To wrap up the week, I have now used that extra space to provide direct access to more of the internal indexes.
And that's what I did during the hackfest. I already gave thanks but here they are again, Kat & Dave, the UEA, the foundation, the participants and visitors.
This is hackfest week, it's been a long time. I arrived in Norwich Saturday evening, after almost three hours in London Liverpool Street Station looking at trains being announced delayed, then cancelled, one after the other. Storms, trees, and powerlines do not mix well.
As FOSDEM is next weekend, the hackfest was set to start on the Sunday, and it was well spent triaging and fixing developer.gnome.org and help.gnome.org bugs. I forgot to note the number of bugs when I started, but each module got down to below 20. And what's especially nice is that many of the bugs I reassigned to other modules quickly got fixed (Dave handled them for gnome-devel-docs at the hackfest).
On Monday we got to the UEA School of Computing Sciences (thanks for having us), and I started the day presenting the code running both websites to Martin Packman. Then I went on to add support for the no-lc-dist flag that had been added to yelp-tools. It's a new feature that has not yet been advertised, because using it meant translations wouldn't work on help.gnome.org. But that's over now and modules can start using it; it will mean smaller tarballs and a faster 'make distcheck', as only the .po file will have to be added to the tarballs.
Later that day I took a detour from documentation to ponder a health check for GNOME applications; I copied some metrics from Andre's "Indicators for Missing Maintainership in Collaborative Open Source Projects" paper and wrote some code to aggregate data from jhbuild modulesets, doap files, and git logs. I pushed my work in progress to people.gnome.org/~fpeters/health/.
And here we are on Tuesday, and the feature of the day is the possibility to have stable documents updated directly from git branches. This is nice for the documentation team, as it won't require maintainers to publish new tarballs to get documentation changes on the websites, and for the same reason it will also be great for translators: it is now much more useful to keep translating documentation even after scheduled GNOME releases.
This is all technical stuff, but a hackfest is not limited to that, so thanks to Kat and Dave for organizing it (and the hosting, and the breakfasts, many thanks), to the other participants, and to the GNOME foundation for its sponsorship. These have been great days, and surely the remaining ones will be as productive. And then it will be back to Brussels, and FOSDEM...
This article is about how to improve high availability on stateful firewalls using netfilter's conntrack synchronization. In a later article we will discuss how to automatically remove static routes when a gateway is down (WAN gateway failover).
Stateful firewalling is now used in most firewall architectures.

Stateful mode keeps track of network connections, which makes the sysadmin's life easier.
To view the active connection tracking table and work with it, you can install the conntrack package. It provides commands such as:
conntrack -S (Show statistics)
or
conntrack -L (List conntrack)
In our use case, we need to synchronize connection tracking between two firewall nodes. This is handled by a daemon called conntrackd.
apt-get install conntrackd
Conntrackd has three replication approaches: "no track", "ft-fw" and "alarm".
More information: http://conntrack-tools.netfilter.org/manual.html#sync
We chose the ft-fw mode because it is production-ready, more stable, and works well.

To use ft-fw, you can reuse the shipped example as your configuration and make a few small changes, such as your network addresses.
zcat /usr/share/doc/conntrackd/examples/sync/ftfw/conntrackd.conf.gz > /etc/conntrackd/conntrackd.conf
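The parts you typically have to adapt are in the Sync section of conntrackd.conf; here is a minimal sketch based on the shipped ft-fw example (the multicast group matches the firewall rules later in this post, while the interface name and local address are placeholders to adjust):

Sync {
    Mode FTFW {
    }
    Multicast {
        IPv4_address 225.0.0.50
        Group 3780
        IPv4_interface 10.0.0.1     # this node's address on the dedicated sync link
        Interface eth1              # dedicated sync interface
        SndSocketBuffer 1249280
        RcvSocketBuffer 1249280
        Checksum on
    }
}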
Conntrackd should start as a daemon at boot time; on Debian this is handled by the init script and /etc/default/conntrackd.
Since you drop all undesired traffic, we need to add some rules on both nodes to allow the traffic generated by conntrackd:
# ------------------------- Conntrack
iptables -A INPUT -p udp -i $IFCONN -d 225.0.0.50/32 --dport 3780 -j ACCEPT
iptables -A INPUT -p udp -i $IFCONN -s $IPCONN --dport 694 -j ACCEPT
Your configuration should now work without any problem, so we can play with the daemon.

Conntrackd works in a client/server fashion, so we can query the daemon from the command line to inspect its caches, statistics, etc.
Here are some examples:

To show the synchronized tables, we can use the following commands. To see the external cache (the entries of the other node, gw01, as replicated to gw02):
root@gw02:~# conntrackd -e
To see the internal cache:
root@gw02:~# conntrackd -i
You can compare the results by counting them:
root@gw02:~# conntrackd -e | wc -l
325
root@gw01:~# conntrackd -i | wc -l
328
And to show more statistics:
conntrackd -s
As you can see, ft-fw is asynchronous. Our setup is "Active-Backup". You can trigger a resync manually for fun:
root@gw02:~# conntrackd -n
Conntrackd also provides an Active-Active setup, but it is still asymmetric. For more information you can read the manual: http://conntrack-tools.netfilter.org/manual.html#sync-aa
At Easter-eggs we use Python and WSGI for web application development.

Over the last few months some of our applications crashed periodically. Thanks to the WebError ErrorMiddleware, we receive an email each time an internal server error occurs.

For example, someone tried to retrieve all of our French territories data through the API.
The problem is simple: when the request headers contain non-UTF-8 characters, the WebOb Request object raises a UnicodeDecodeError exception because it expects the headers to be encoded in UTF-8.

End-user tools like web browsers generate valid UTF-8 requests with no effort, but non-UTF-8 requests can be generated by some odd software or by hand from an ipython shell.

Let's dive into the problem in ipython:
In [1]: url = u'http://www.easter-eggs.com/é'

In [2]: url
Out[2]: u'http://www.easter-eggs.com/\xe9'

In [3]: url.encode('utf-8')
Out[3]: 'http://www.easter-eggs.com/\xc3\xa9'

In [4]: latin1_url = url.encode('latin1')
Out[4]: 'http://www.easter-eggs.com/\xe9'

In [5]: latin1_url.decode('utf-8')
[... skipped ...]
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 27: unexpected end of data
This shows that U+00E9 is the Unicode codepoint for the 'é' character (see Wikipedia), that its UTF-8 encoding is the 2-byte sequence '\xc3\xa9', and that decoding a latin1-encoded byte as UTF-8 raises an error.
The stack trace attached to the error e-mails helped us find that the UnicodeDecodeError exception occurs when accessing one of these Request properties: path_info, script_name and params.
So we wrote a new WSGI middleware to reject mis-encoded requests, returning a bad request HTTP error code to the client.
from webob.dec import wsgify
import webob.exc


@wsgify.middleware
def reject_misencoded_requests(req, app, exception_class=None):
    """WSGI middleware that returns an HTTP error (bad request by default)
    if the request attributes are not encoded in UTF-8.
    """
    if exception_class is None:
        exception_class = webob.exc.HTTPBadRequest
    try:
        req.path_info
        req.script_name
        req.params
    except UnicodeDecodeError:
        return exception_class(u'The request URL and its parameters must be encoded in UTF-8.')
    return req.get_response(app)
The source code of this middleware is published on Gitorious: reject-misencoded-requests
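To check the behaviour by hand, you can send a latin1-encoded URL with curl (the host and port are placeholders for wherever the application is served); the middleware should answer with a 400 instead of a 500:

# '%E9' decodes to the raw byte 0xe9, which is not valid UTF-8
curl -i 'http://localhost:8080/%E9'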
We could have guessed the encoding and set the Request.encoding attribute, but that would have fixed only the reading of PATH_INFO and SCRIPT_NAME, not the POST and GET parameters, which are expected to be encoded in UTF-8 only.

That's why we simply return a 400 Bad Request HTTP code to our users. It is simpler and does the job.
For one of our clients, I had to validate a very large number of e-mail addresses in a Python script. Thinking about it, this situation is common: how many customer databases contain a large number of e-mail addresses, some of which were entered years before, without necessarily any efficient validation (double opt-in for example)? One can easily imagine that, when using such an address years later, nothing guarantees that our e-mail will reach its destination. That is why it was useful for us to write a script able to detect the addresses of a database that are definitely invalid.
For this script (written in Python), I first turned, quite logically, to a library that seemed perfectly suited: validate_email. This library implements a multi-step methodology (syntax check, then DNS validation of the MX record, then an SMTP-level check).

It is also possible to choose how far the validation should go: syntax only, syntax plus DNS validation of the MX record, or full validation.

My first uses of this library showed that it was far from optimal for mass validation of e-mail addresses (more than 24 hours to validate about 70,000 addresses, and not even completely). So I developed a similar library that optimizes steps 2 and 3: after all, why validate the same domain name several times, or why validate an SMTP connection to the same mail server more than once? The result is a library that follows the same methodology but optimizes it for mass validation, simply by adding a cache of the checks shared by all addresses of a given domain. To give you an idea of the gain, validating about 70,000 addresses (syntax check plus MX connection check) now takes about 1h30 to 2h. This library, named mass_validate_email, is available here and is published under the LGPL license.
Here at @Easter-eggs[1], like others, we started playing with the awesome CEPH[2] distributed object store. Our current use of it is hosting virtual machine disks.
Our first cluster was installed this week, on Tuesday. Some non-production virtual machines were installed on it and the whole cluster was added to our monitoring systems.

On Thursday evening, one of the cluster nodes went down because of an overheating CPU (to be investigated; it looks like a fan problem).
The monitoring systems sent us alerts as usual, and we discovered that CEPH had just done its job:
On Friday morning, we repaired the dead server and booted it again:
Incident closed!
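If you want to watch CEPH handle such a recovery yourself, the standard status commands are all you need (this is generic CEPH usage, not a transcript of our incident):

ceph health
ceph -s    # cluster status summary
ceph -w    # follow cluster events live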
What else can I say?
As we have just configured our first CEPH[1] cluster, we needed to move our current virtual machines (using raw images stored on a standard filesystem) so that they use the RBD block devices provided by CEPH.
We use Libvirt[2] and Kvm[3] to manage our virtual machines.
This step can be done offline:
virsh shutdown vmfoo
qemu-img convert -O rbd /var/lib/libvirt/images/vmfoo.img rbd:libvirt-pool/vmfoo
virsh edit vmfoo
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/vmfoo.img'/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
becomes:
<disk type='network' device='disk'>
  <driver name='qemu'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='sec-ret-uu-id'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/vmfoo'>
    <host name='10.0.0.1' port='6789'/>
    <host name='10.0.0.2' port='6789'/>
    ...
  </source>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
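If the libvirt secret referenced by uuid='sec-ret-uu-id' does not exist yet, it can be created roughly like this; the secret XML and the 'libvirt' CEPH user are assumptions based on the snippet above, and 'sec-ret-uu-id' stands for the real UUID you use:

# define the secret object, then store the CEPH key for client.libvirt in it
cat > ceph-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>sec-ret-uu-id</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
virsh secret-define ceph-secret.xml
virsh secret-set-value --secret sec-ret-uu-id --base64 "$(ceph auth get-key client.libvirt)"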
virsh start vmfoo
The trick here is to use migration support in libvirt/kvm and the ability to provide a different xml definition for the target virtual machine:
qemu-img info /var/lib/libvirt/images/vmfoo.img
qemu-img create -f rbd rbd:libvirt-pool/vmfoo XXG
virsh dumpxml vmfoo > vmfoo.xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/vmfoo.img'/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
becomes:
<disk type='network' device='disk'>
  <driver name='qemu'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='sec-ret-uu-id'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/vmfoo'>
    <host name='10.0.0.1' port='6789'/>
    <host name='10.0.0.2' port='6789'/>
    ...
  </source>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
virsh migrate --live --persistent --copy-storage-all --verbose --xml vmfoo.xml vmfoo qemu+ssh://target_node/system
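Once the migration is finished, a quick sanity check confirms that the domain now reads its disk from RBD rather than from the local raw file:

virsh domblklist vmfoo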
Notes:
Today I want to publish my scripts. A few days ago I decided to use Git to manage them, but they were only visible to me, on my servers. So I decided to use ViewGit, a web interface written in PHP. It's cool! Now I can browse my scripts with my browser! But in fact I'm unhappy, because nobody can use git mechanisms like "git clone". So I want to use "git over HTTP", with git-http-backend.
For this environment, I use the Nginx web server on Debian to serve the files.

Installing ViewGit is pretty easy: just download, untar and play. You must drop your git projects in the "projects" directory, like me:
/var/www/viewgit/projects
And declare your projects in /var/www/viewgit/inc/localconfig.php
At this point, your nginx config looks like this:
vi /etc/nginx/sites-available/viewgit
server {
    listen 10.0.0.6:80;
    root /var/www/viewgit;
    index index.php;
    server_name git.d2france.fr;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9001;
    }
}
Before using git over HTTP, you need to know two fundamentals: first, you may want to allow people to download your projects, and second, you may want to allow people to push modifications to your projects.

To handle git clone, fetch and pull requests, git uses the http.uploadpack service.

To handle git push, git uses the http.receivepack service.

To provide those services, you need to run git-http-backend as a CGI script behind your web server, with nginx's CGI wrapper (fcgiwrap) to execute it.
apt-get install git fcgiwrap
With Nginx, the configuration could be like this :
server {
    listen 10.0.0.6:80;
    root /var/www/viewgit;
    index index.php;
    server_name git.d2france.fr;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9001;
    }

    location ~ ^/projects/.*/(HEAD|info/refs|objects/info/.*|git-upload-pack)$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        fastcgi_param PATH_INFO $uri;
        fastcgi_param GIT_PROJECT_ROOT /var/www/viewgit;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }
}
Here I just want to share my scripts, so I only allow git-upload-pack requests.
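After editing the vhost, check the syntax and reload nginx so the new location block is active:

nginx -t
service nginx reload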
Now you can clone your git repositories with this command:
git clone http://server/projects/foobar
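To quickly check that the backend answers without doing a full clone, you can simply list the advertised refs (using the server name from the configuration above):

git ls-remote http://git.d2france.fr/projects/foobar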
As you can see, in ViewGit you cannot attach any information to a project, like the URL of your git repository. A friend wrote a plugin for that; you can find his work at viewgit-projectinfos-plugin.
On a LAN with IPv6 autoconfiguration enabled (using a radvd service for example), it is often necessary to set static addresses for servers, and therefore to deactivate IPv6 autoconfiguration on them.
With Debian 5.0 at least, it should be as easy as adding:
pre-up sysctl -w net.ipv6.conf.eth0.autoconf=0
in /etc/network/interfaces. But it doesn't work, because unless some IPv6 addresses are set up earlier in the init process, the ipv6 module is not loaded and so net.ipv6 does not exist. To fix this, just explicitly add ipv6 to /etc/modules...

The same thing happens if you want to disable router advertisements with net.ipv6.conf.IFACE.accept_ra=0.
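Put together, a static IPv6 configuration for eth0 could look roughly like this (the addresses below are documentation-prefix placeholders to adapt):

# /etc/modules
ipv6

# /etc/network/interfaces
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
    gateway 2001:db8::1
    pre-up sysctl -w net.ipv6.conf.eth0.autoconf=0
    pre-up sysctl -w net.ipv6.conf.eth0.accept_ra=0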
A few weeks ago I needed to convert a qcow2 image to a raw image, so I executed this command:
qemu-img convert -f qcow2 -O raw vm-foo.qcow2 vm-foo.raw
After that, I had a non-sparse image, because qemu-img does not output sparse files. I could see this by running:
qemu-img info vm-foo.img
or
ls -lksh vm-foo.img
So now I want to convert this new VM image to a sparse file, because I want to free space in my filesystem. As you may know, in a sparse file the zeroed data does not take up space in your filesystem, unlike in a non-sparse file.

Moreover, when files are deleted inside the guest, their data stays in place on the disk (only the index entries are removed).

In my case, I want to get the best out of my future sparse VM image, so I decided to force zeroed data into it.

So, on the running guest, I wrote zeroes over as much free space as possible, using this command:
root@foo# dd if=/dev/zero of=/tmp/zerotxt bs=1M
root@foo# sync
root@foo# rm /tmp/zerotxt
Now I shut down the guest and convert the non-sparse file into a sparse file using the cp command:
cp --sparse=always vm-foo.raw vm-foo.raw-sparse
Well done, I got a clean sparse file!
qemu-img info vm-foo.raw-sparse
image: vm-foo.raw-sparse
file format: raw
virtual size: 40G (42949672960 bytes)
disk size: 6.3G
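A quick way to compare the apparent size with the real disk usage, without qemu-img (file names from above):

ls -lh vm-foo.raw vm-foo.raw-sparse   # apparent size, identical for both
du -h vm-foo.raw vm-foo.raw-sparse    # blocks actually used, much smaller for the sparse copy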
This problem may seem simple at first, but properly monitoring the synchronization of OpenLDAP directories is not as trivial as it looks. All the complexity comes from the relatively simplistic syncrepl replication mechanism: an outdated LDAP schema or badly defined ACLs can easily leave your directories out of sync without it being very visible.

The syncrepl replication mechanism relies on version identifiers of the data stored in the directory to determine which information must be replicated and which information is the most recent (in the case of master-master replication). These version identifiers are stored in the contextCSN attribute of the directory root and in the entryCSN attribute of every object in the directory. The values of these (necessarily indexed) attributes are built from the last modification date. This allows the OpenLDAP syncrepl overlay to determine, from the contextCSN of a replica, which LDAP objects of the source directory have been modified since then and therefore have to be synchronized. Replicating an object then consists in transferring the whole object from one directory to the other, without distinguishing between attributes: all attributes are replicated, whatever the modification that triggered the transfer. This very simple mechanism is unfortunately not very robust, and desynchronization cases are relatively frequent. Good monitoring is therefore essential, all the more so as a broken synchronization does not prevent a replica from answering the requests sent to it.
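To look at these identifiers yourself, a plain ldapsearch against each server is enough (the base DN and server names follow the example used later in this post):

ldapsearch -x -H ldap://ldap0.example.lan -b 'o=example' -s base contextCSN
ldapsearch -x -H ldap://ldap1.example.lan -b 'o=example' -s base contextCSN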
A Nagios (or Icinga) check plugin already existed, but it only relied on the contextCSN values of the directories, without any object-by-object, let alone attribute-by-attribute, verification. It could therefore miss a desynchronization.

So I had the opportunity to develop one that, in my opinion, takes a more global approach to monitoring syncrepl replication. This plugin does not merely check the contextCSN values; it can also check the objects present in each directory, the value of their entryCSN attribute, or even the values of all their attributes. A more exhaustive check is obviously more expensive in terms of resources, which is why I wanted, through various parameters, to allow a more or less complete verification of the synchronization state:

Note however that the most complete check, on a directory of about 10,000 objects, takes only a few seconds (between 3 and 10 seconds depending on the load of the servers).

The plugin can be downloaded here.

Usage example:
check_syncrepl_extended \
    -p ldap://ldap0.example.lan \
    -c ldap://ldap1.example.lan/ \
    -D 'uid=nagios,ou=sysaccounts,o=example' \
    -P 'password' \
    -b 'o=example' -a -n
Definition of the corresponding command in Nagios:
define command {
    command_name check_syncrepl
    command_line /usr/local/lib/nagios/plugins/check_syncrepl_extended -p $ARG1$ -c ldap://$HOSTADDRESS$/ -b $ARG2$ -D '$ARG3$' -P '$ARG4$' -a -n
}
Definition of the corresponding service:
define service {
    use                  generic-service
    service_description  LDAP Syncrepl
    check_command        check_syncrepl!ldap://ldap0.example.lan!o=example!uid=nagios,ou=sysaccounts,o=example!password
    host_name            ldap1
}
When setting up a mail server, at one point or another you will always need to send a test e-mail or to test an IMAP or POP connection. To make all this easier, we put together a handy toolbox that we named Mailt. It is composed of three tools (for now):

You will also easily be able to test a STARTTLS or SMTPS connection, authenticated or not, and with the --debug parameter it will be as if you had typed everything manually in a telnet or openssl s_client session.

This tool suite is written in Python and uses standard libraries, most often already present on your servers (Debian package python-support). It can easily be installed through a very lightweight Debian package, downloadable here, or by adding the following Debian repository and installing the mailt package:
deb http://debian.zionetrix.net/debian/ squeeze mailt
Note: these packages are also available for the Debian testing release (Wheezy).
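If you go the repository route, the usual steps apply (the deb line is the one above; depending on how the repository is signed you may also need to import its key):

echo 'deb http://debian.zionetrix.net/debian/ squeeze mailt' > /etc/apt/sources.list.d/mailt.list
apt-get update
apt-get install mailt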
When a user reaches your DokuWiki wiki without having the right to do so, a message tells them that access to this page is denied and suggests that they log in. Depending on how you use your wiki, this message does not necessarily match reality. It can then be interesting to have a customizable page that explains, in your own words, why the user got this page, or how to reach the page they were looking for.

With this in mind, I wrote a DokuWiki plugin named deniedpage that lets you define a page of your wiki to which users will be automatically redirected when access to a page is denied. You can choose any page you like through the configuration page of the administration section. Like any other page of your wiki, you can create, and later edit, this page with the online editor.

Thanks to DokuWiki's extension manager, installing this plugin is very simple: copy the following URL into the download field and click the Download button:
https://github.com/brenard/dokuwiki-plugin-deniedpage/zipball/master
Updating will be just as simple, using the Update button. Remember to enable the plugin after installing it, and make sure your custom error page is readable by anyone. For more information about this plugin, you can read its page on Dokuwiki.org.
And I wasn't there, but that has nothing to do with GNOME; it simply conflicted with another important project I had been working on for almost a year, Radio Roulotte, and with a recurring one, Radio Esperanzah!.
The idea of Radio Roulotte mostly came up last year: getting a caravan and two horses, visiting various villages, meeting locals and producing a radio show with them. We were a small team talking about it, then preparing it, getting new contacts, requesting some money, reshaping the caravan, etc., but it only became real when we all met and the horses arrived.
And the days were cut in two parts, travelling in the morning...
Road from Buzet to Soye, July 27th
Road from Floreffe to Buzet, July 26th
In the streets of Floriffoux, July 28th
... then assembling the studio, and that meant getting stuff out of the caravan, getting other stuff in, including electrical power, calibrating the satellite dish, etc.
In Soye, July 27th
All of this to get ready at 6pm to produce one hour of radio, live with locals.
Studio in Floreffe, July 25th
Studio in Soye, July 27th
Studio in Floriffoux, July 28th (outside for the last one)
And as quickly as it started the week was over, we said goodbye to some team members, took a day almost off, and started welcoming members of the radio Esperanzah! team. That project is well oiled, it was the 10th time it happened, it's about covering the various parts of the Esperanzah! music festival.
So we went and assembled things again, the studio as well as our work room, the FM transmitter and computers below the stages to record the concerts.
Hardware below a stage
One day schedule on the board
The festival started, and we kept working, presenting the daily programs, interviewing artists and other participants, recording in the alleys...
Esperanzah Camping filling up
A concert on the stage
And for my part, mixing the concerts, so we could broadcast one in the evening and offer them to the artists. For the first time I did it with Ardour 3 (a git snapshot actually, 44fc92c3) and it went beautifully.
My horizon for three days
As usual I only attended a few concerts, but at least I got to see An Pierlé and Asian Dub Foundation.
So here you are, you now know what I did during your GUADEC. I heard many good things about Brno; let's work now to get 3.10 rocking in September, and see you in Strasbourg for the next GUADEC.
For one of our clients, the following problem came up: in order to support the SSO authentication used in their infrastructure, DokuWiki had to stop authenticating users directly and instead trust the authentication performed by Apache. For that, DokuWiki, with its authentication plugin system, provides a relatively simple solution:

If, like us, you also need to fetch information about your DokuWiki users from an LDAP directory while trusting the authentication done by Apache, you will be happy to learn that this is now possible. Indeed, the DokuWiki LDAP authentication plugin (authldap) did not implement the trustExternal() method until now, and therefore always displayed a login form to the user, even when they had already been authenticated by Apache. We implemented this method and proposed it to the DokuWiki project. The corresponding pull request on GitHub can be found here.
A long time without any activity here, and more generally less time with computers these past months, even though I visited Lyon for the JDLL in November and was of course present in Brussels for FOSDEM and for the developer experience hackfest that happened just before. (I didn't write about it, but it was totally unexpected for me to find myself there with two other motivated devhelp developers; many thanks to Aleksander and Thomas.)
But I am now finally back in action, installed in my new place, and for the occasion here are some pictures, starting perhaps with the most raw moment, after a few weeks:
Walls and flooring removed, October 2012
I kept my other flat for a few months but had to let it go by December, and by chance my new upstairs neighbour offered me her spare bedroom, as well as her attic to store my boxes (thanks Fleur!). Still, it all took longer than expected and I became quite impatient to settle into my new place; that finally happened ~10 days ago.
Temporary office space
The packing boxes left the attic but most of them are still unopened.
Packing boxes moved to the future living room
The kitchen is almost done, not shown in the picture: a fridge is still missing.
Black and white kitchen
And I have the most fabulous bathroom.
Tetris bathroom
Thanks to my good friend Macha for the architect work she did; it's that nice because she always kept an eye on the smallest details.
For quite some time access to recent files has been put forward in GNOME, even more so in 3.6 with a "Recent Files" view in Files (née Nautilus) that makes use of a new recent files backend in gvfs.
This is all very nice, but my daily activities still involve a lot of command line usage, and I didn't find any way to mark as recent the files I receive via mutt, the text files I create in vim, the pictures I resize with ImageMagick, etc. That always bothered me at the moment I wanted to access those files, but then I would just drop a copy of the file into a scratch directory I had bookmarked and go on with my work.

Until yesterday, when I finally decided to fix that and quickly put together recent, a command line utility that just puts the file it gets as an argument into the recent files list. It's very simple, uses GFile and GtkRecentManager, and the code is located there: recent.c. It's so simple I guess many others have written something similar, but here you have it; perhaps it will be useful.
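A possible way to build and use it (the compile flags are an assumption based on the GTK+/GIO APIs it uses, and the file path is just an example):

gcc recent.c -o recent $(pkg-config --cflags --libs gtk+-3.0)
./recent ~/Documents/report.txt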
On Wednesday GNOME 3.6 has been released; many thanks to all people involved, this release is definitely a great one. And then today it was my turn to be released, many thanks for the kind words (and phone calls, and visits).
A Coruña, July 25th, 2012.

Barely landed from GUADEC, a new bag packed; barely a detour through Brussels, a train caught; barely arrived in Floreffe, non-stop life, a fantastic crew, thank you all.

Memories of Radio Esperanzah! 2012

And then already the return, to Brussels and to work, between lamb chops and the canal bank (top), between an industrial estate and a disused warehouse (bottom), a rather odd week.

But to wrap it up, a return by way of GNOME: 201st commit digest and 40 bugs fixed in the "website" product.