Planet Libre-entreprise.org

July 31, 2014

Labs

wbmtranslator 0.7.3 released

* Updated German translation (Raymond Vetter).
* Updated "webmin-core-modules.data".
* Increased "send translation" combobox.

by Emmanuel Saracco at July 31, 2014 05:29 AM

April 18, 2014

Easter-eggs

Backporting a Debian package from testing to stable

We want to backport a package from testing to stable. For our example, we will use the python-numpy package to illustrate the process.

Prerequisites

Building a Debian package can sometimes turn into a battlefield, especially when the package in question has a number of dependencies. To isolate the work, I suggest using debootstrap.

Create the environment:

apt-get install debootstrap
debootstrap --arch amd64 wheezy ~/dbs-builddeb http://ftp.fr.debian.org/debian/

Chroot into the debootstrap:

chroot ~/dbs-builddeb

We will need a few development tools, so let's install them:

apt-get install devscripts build-essential dh-buildinfo
echo "export LANG=C" >> ~/.bashrc

Howto by example

Configure apt in /etc/apt/sources.list, like this:

## Wheezy
deb http://ftp.fr.debian.org/debian wheezy main
deb-src http://ftp.fr.debian.org/debian wheezy main
# wheezy-backports
deb http://ftp.fr.debian.org/debian wheezy-backports main contrib non-free
## Jessie
#deb http://ftp.fr.debian.org/debian jessie main
deb-src http://ftp.fr.debian.org/debian jessie main

Update the package lists:

apt-get update

Fetch the sources:

apt-get source python-numpy

Fetch and install the build dependencies:

apt-get build-dep python-numpy

Build the source:

cd python-numpy-1.8.1
dch -i

python-numpy (1:1.8.1-1~etalabbpo70+1) unstable; urgency=low

  * Non-maintainer upload.
  * Backport to wheezy.

 -- Felix Defrance <felix.defrance@data.gouv.fr>  Thu, 10 Apr 2014 14:22:32 +0000

dpkg-buildpackage -tc

Done! The built package can be found in the parent directory:

python-numpy_1.8.1-1~etalabbpo70+1.debian.tar.gz
python-numpy_1.8.1-1~etalabbpo70+1_amd64.deb
python-numpy_1.8.1-1~etalabbpo70+1.dsc
python-numpy_1.8.1-1~etalabbpo70+1_amd64.changes

Installing the package

For personal use, a dpkg -i is enough; otherwise, add the package to a repository set up specifically for the occasion, for example.

by Félix Defrance at April 18, 2014 12:16 PM

March 27, 2014

Frédéric Péters

GNOME 3.12

Just like the schedule said, GNOME 3.12 was released today, and of course it's our best release ever — honest, you can really feel the maturity of the whole GNOME 3 experience. I've been quite busy with other projects in recent months and couldn't participate as much as I wanted, but I nevertheless have a few perspectives to share, and people to thank.

Foremost, the release team: from that point of view, the landing was particularly soft, with very few freeze break requests, which is a good sign. Hat tip to Matthias for the handling of .0, and for all the blog posts he has been writing detailing the changes.

For the French translation team, where my part is quite small — mostly I attended Le Translathon and provided a few screenshots for the release notes — this also looks like a nice release, especially as new participants joined the team.

Last but not least the documentation team really kicked ass this cycle.

These are just three teams; they're parts of a big project, so I couldn't end without thanking all the other teams and persons, from developers to testers, from designers to users, from the foundation board to the engagement team: GNOME is the sum of us all.

Let's celebrate.

by Frédéric Péters at March 27, 2014 09:36 PM

January 31, 2014

Frédéric Péters

Last days of documentation hackfest

It's already the last day of the winter documentation hackfest in Norwich (pronounced like Porridge), tomorrow we'll drive to Brussels, for FOSDEM, and here comes a second report of my activities.

On Tuesday, after the work on git stable updates (see last post), I concentrated on various speed improvements, including a small change to our own local configuration that worked wonders (it had a hack to use XSL files from a local yelp-xsl copy, but that broke some timestamping and caused some modules to be rebuilt endlessly). In normal operation, a full build of help.gnome.org now takes about ten minutes.

Kat had made a request to have application icons displayed in the index pages, as they are now included in Mallard documentation titles. I started that on Wednesday and it went more easily than expected; the pages indeed look nicer now.

The other important part of Wednesday was a request from Petr, to get the getting started pages integrated on the web site. The particular thing about the gnome-getting-started-docs module is that it installs pages to an existing document (gnome-help), making use of Mallard generated indexes and links to provide an integrated document. Unfortunately that operation mode didn't go well with the code, as it handled tarballs one after the other and was rather confused when another document with the same name, but no index page, came in. It required quite a lot of changes, and I'm not happy about all of them as there's quite a bit of code duplication and some hardcoded parts, but at the end of the day it was working, and you can now go and view the Getting Started material on the web site.

Documentation hackfest

For the last day I switched to the developer docs, and as I looked at Allan's notes and thought about a way forward, I went back to the code and discovered I had added the possibility to import documentation from wiki pages almost three years ago, during the 3.0 hackfest in Bangalore... It seemed like a good fit for the series of "How Do I" pages mostly created by Ryan and Matthias, so I refreshed the code and voila! the pages got on the Guides page.

During the last year or so many elements were removed from the frontpage, first the platform grid, then the "10 minutes tutorial" carousel, but that left the page quite empty. To wrap up the week, I have now used that extra space to provide direct access to more of the internal indexes.

And that's what I did during the hackfest. I already gave thanks, but here they are again: Kat & Dave, the UEA, the foundation, the participants and visitors.

by Frédéric Péters at January 31, 2014 06:49 PM

January 28, 2014

Frédéric Péters

First days of documentation hackfest

This is hackfest week, it's been a long time. I arrived in Norwich Saturday evening, after almost three hours in London Liverpool Street Station looking at trains being announced delayed, then cancelled, one after the other. Storms, trees, and power lines do not mix well.

As there's FOSDEM next weekend, the hackfest was set to start on the Sunday, and it was well spent triaging and fixing developer.gnome.org and help.gnome.org bugs. I forgot to take note of the number of bugs when I started, but each module got down to below 20. And what's especially nice is that many of the bugs I reassigned to other modules quickly got fixed (Dave, at the hackfest, handled them for gnome-devel-docs).

On Monday we got to the UEA School of Computing Sciences (thanks for having us), and I started the day presenting the code running both websites to Martin Packman. Then I went on to add support for the no-lc-dist flag that had been added to yelp-tools. It's a new feature that had not yet been advertised, because using it meant translations wouldn't work on help.gnome.org. But that's over and modules can start using it; it will mean smaller tarballs and a faster 'make distcheck', as only the .po files will have to be added to the tarballs.

Later that day I took a detour from documentation to ponder some health check for GNOME applications, I copied some metrics from Andre's "Indicators for Missing Maintainership in Collaborative Open Source Projects" paper, and wrote some code to aggregate data from jhbuild modulesets, doap files, and git logs. I pushed my work-in-progress to people.gnome.org/~fpeters/health/.

And here we are on Tuesday; the feature of the day is the possibility of having stable documents updated directly from git branches. This is nice for the documentation team, as it won't require maintainers to publish new tarballs to get documentation changes on the websites, and for the same reason it will also be great for translators: it makes it much more useful to continue translating documentation even after scheduled GNOME releases.

This is all technical stuff, but a hackfest is not limited to that. Thanks to Kat and Dave for organizing it (and the hosting, and the breakfasts, many thanks), to the other participants, and to the GNOME foundation for its sponsorship. It's been a great few days, and surely the remaining days will be as productive. And then it will be back to Brussels, and FOSDEM...

by Frédéric Péters at January 28, 2014 05:41 PM

December 23, 2013

Easter-eggs

Active/backup iptables connection tracking between two gateways

This setup is interesting when you want to avoid a SPOF on the firewalls/gateways that sit at the top of your network architecture.

This article is about improving high availability on stateful firewalls using netfilter's conntrack synchronization. In a later article we will discuss how to automatically remove static routes when a gateway is down (Gateway Fail Over Wan).

The need for stateful mode

Stateful firewalling is now used in most firewalling architectures. Stateful mode is based on keeping track of network connections, to make the sysadmin's life better ;)

To view active conntrack entries and deal with them, you can install the conntrack package. It provides commands like:

conntrack -S (Show statistics)

or

conntrack -L (List conntrack)

Stateful Syncing between nodes

In our use case, we need to synchronize connection tracking between two firewall nodes. This is handled by a daemon called conntrackd:

apt-get install conntrackd

Conntrackd has three replication approaches: "no-track", "ft-fw" and "alarm".

  • no-track: best-effort syncing of the tables; no checking is done when the tables are replicated.
  • ft-fw: uses a reliable protocol for message tracking, so that sync errors or corruption can be recovered from.
  • alarm: lets you set the interval at which the tables are synced. This option requires a lot of bandwidth.

More information: http://conntrack-tools.netfilter.org/manual.html#sync

We chose ft-fw mode because it's ready for production environments, more stable, and it works well.

To use ft-fw, you can reuse the shipped example as your configuration and make a few small changes, such as your network addresses:

zcat /usr/share/doc/conntrackd/examples/sync/ftfw/conntrackd.conf.gz > /etc/conntrackd/conntrackd.conf

Conntrackd should start as a daemon at boot; on Debian this is defined by the init scripts and /etc/default/conntrackd.

Iptables Rules

As you drop all undesired traffic, you need to add some rules on both nodes to allow the traffic coming from conntrackd:

# ------------------------- Conntrack
iptables -A INPUT -p udp -i $IFCONN -d 225.0.0.50/32 --dport 3780 -j ACCEPT
iptables -A INPUT -p udp -i $IFCONN -s $IPCONN  --dport 694 -j ACCEPT

Check your synchronisation

Assuming your configuration works without any problem, we can now play with the daemons.

Conntrackd provides commands that work like a client/server, so we can query conntrackd from the command line for caches, statistics, etc.

Here are some examples:

To show the tables being synchronized, we can use the following commands. First the external cache (on gw02, the external cache holds the entries synchronized from its peer gw01):

root@gw02:~# conntrackd -e 

See the internal cache:

root@gw02:~# conntrackd -i

You can compare the results and count them:

root@gw02:~# conntrackd -e | wc -l
325
root@gw01:~# conntrackd -i | wc -l
328

And show more statistics:

conntrackd -s

As you can see, ft-fw is asynchronous. Our setup is "Active-Backup". You can trigger a sync manually for fun:

root@gw02:~# conntrackd -n

Conntrackd also provides an Active-Active setup, but it's still in asymmetric mode. For more information you can read the manual: http://conntrack-tools.netfilter.org/manual.html#sync-aa

by Félix Defrance at December 23, 2013 05:04 PM

November 05, 2013

Easter-eggs

Handling misencoded HTTP requests in Python WSGI applications

At Easter-eggs we use Python and WSGI for web applications development.

Over the last few months, some of our applications crashed periodically. Thanks to WebError's ErrorMiddleware, we receive an email each time an internal server error occurs.

For example, someone tried to retrieve all of our French territories data with the API.

The problem is simple: when the request headers contain non-UTF-8 characters, the WebOb Request object throws a UnicodeDecodeError exception, because it expects the headers to be encoded in UTF-8.

End-user tools like web browsers generate valid UTF-8 requests with no effort, but non-UTF-8 requests can be generated by some odd software, or by hand from an ipython shell.

Let's dive into the problem in ipython:

In [1]: url = u'http://www.easter-eggs.com/é'

In [2]: url
Out[2]: u'http://www.easter-eggs.com/\xe9'

In [3]: url.encode('utf-8')
Out[3]: 'http://www.easter-eggs.com/\xc3\xa9'

In [4]: latin1_url = url.encode('latin1')

In [5]: latin1_url
Out[5]: 'http://www.easter-eggs.com/\xe9'

In [6]: latin1_url.decode('utf-8')
[... skipped ...]
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 27: unexpected end of data

This shows that U+00E9 is the Unicode code point for the 'é' character (see Wikipedia), that its UTF-8 encoding is the two bytes '\xc3\xa9', and that decoding a latin1-encoded byte as UTF-8 throws an error.

The stack traces attached to the error emails helped us find that the UnicodeDecodeError exception occurs when calling one of these Request properties: path_info, script_name and params.

So we wrote a new WSGI middleware to reject mis-encoded requests, returning a bad request HTTP error code to the client.

from webob.dec import wsgify
import webob.exc


@wsgify.middleware
def reject_misencoded_requests(req, app, exception_class=None):
    """WSGI middleware that returns an HTTP error (bad request by default) if the request attributes
    are not encoded in UTF-8.
    """
    if exception_class is None:
        exception_class = webob.exc.HTTPBadRequest
    try:
        req.path_info
        req.script_name
        req.params
    except UnicodeDecodeError:
        return exception_class(u'The request URL and its parameters must be encoded in UTF-8.')
    return req.get_response(app)

The source code of this middleware is published on Gitorious: reject-misencoded-requests

We could have guessed the encoding and set the Request.encoding attribute, but that would only have fixed the reading of PATH_INFO and SCRIPT_NAME, not the POST and GET parameters, which are expected to be encoded only in UTF-8.

That's why we simply return a 400 Bad Request HTTP code to our users. It is simpler and does the job.
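
For readers who don't use WebOb, the same guard can be sketched with the standard library alone. This is only an illustration of the idea, not the middleware published above, and the names are ours:

```python
def reject_misencoded_requests(app):
    """WSGI middleware sketch: answer 400 when the raw request path
    is not valid UTF-8, instead of letting the application crash."""
    def middleware(environ, start_response):
        try:
            # Per PEP 3333, PATH_INFO holds the raw request bytes decoded
            # as latin-1; re-encoding recovers them so we can check UTF-8.
            environ.get('PATH_INFO', '').encode('latin-1').decode('utf-8')
        except UnicodeDecodeError:
            start_response('400 Bad Request',
                           [('Content-Type', 'text/plain; charset=utf-8')])
            return [b'The request URL and its parameters must be encoded in UTF-8.']
        return app(environ, start_response)
    return middleware
```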

by Romain Soufflet at November 05, 2013 03:46 PM

October 23, 2013

Easter-eggs

Bulk validation of email addresses

For one of our clients, I had to validate a very large number of email addresses in a Python script. Come to think of it, this situation is common: how many customer databases contain a large number of email addresses which, in some cases, were entered years earlier, without necessarily any effective validation (double opt-in, for example)? It's easy to imagine that, using such an email address years later, there is no guarantee our mail will reach its destination. That's why it was useful for us to write a script able to detect the definitely invalid email addresses in a database.

For this script (written in Python), I naturally turned first to a library that seemed perfectly suited: validate_email. This library implements a multi-step methodology:

  1. A syntactic validation of the email address to start with;
  2. A validation of the domain name, by fetching the domain's mail server (MX record) via a DNS query;
  3. A check of the mail server by connecting to it, verifying its response to an SMTP HELO command, or even its response to a test delivery to the address being checked (SMTP RCPT TO command).

It is also possible to choose how far to push the validation: syntax only, syntax plus DNS validation of the MX, or full validation.

My first runs with this library showed that it was far from optimal for bulk validation (more than 24 hours to validate about 70,000 email addresses, even with incomplete checks). So I developed a similar library that optimizes steps 2 and 3: why validate the same domain name several times, or re-validate an SMTP connection to the same mail server? The result is a library that follows the same methodology but optimizes it for bulk validation, simply by adding a cache for the checks shared by addresses of the same domain. To give an idea of the gain, validating about 70,000 email addresses (syntax check plus MX connection check) now takes around 1.5 to 2 hours. This library, named mass_validate_email, is available here and published under the LGPL license.
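
The caching idea can be sketched in a few lines. This is only an illustration of the approach, not the mass_validate_email code; check_domain stands in for the expensive DNS/SMTP steps:

```python
import re
from functools import lru_cache

# Deliberately loose syntax check, just for the sketch.
EMAIL_RE = re.compile(r'^[^@\s]+@([^@\s]+\.[^@\s]+)$')

def make_bulk_validator(check_domain):
    """check_domain(domain) -> bool is the expensive part (MX lookup,
    SMTP probe). Caching its result means each domain is checked only
    once, however many addresses share it."""
    cached_check = lru_cache(maxsize=None)(check_domain)

    def validate(address):
        m = EMAIL_RE.match(address)              # step 1: syntax
        if not m:
            return False
        return cached_check(m.group(1).lower())  # steps 2-3, cached
    return validate
```

Validating thousands of addresses on a handful of domains then triggers only one DNS/SMTP check per domain.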

by Benjamin Renard at October 23, 2013 09:22 AM

September 27, 2013

Easter-eggs

When CEPH comes to the rescue

Here at Easter-eggs[1], like others, we have started playing with the awesome CEPH[2] distributed object storage. We currently use it to host virtual machine disks.

Our first cluster was installed just this week, on Tuesday. Some non-production virtual machines were installed on it and the whole cluster was added to our monitoring systems.

On Thursday evening, one of the cluster nodes went down due to CPU overheating (to be investigated; it looks like a fan problem).

The monitoring systems sent us alerts as usual, and we discovered that CEPH had just done the job :) :

  • the lost server was detected by the other nodes
  • CEPH started to replicate PGs between the other nodes to maintain our replication level (this introduced a bit of load on the virtual machines during the sync)
  • the virtual machines that were running on the dead node were not alive anymore, but we just had to start them manually on another node (pacemaker is going to be set up on this cluster to manage this automagically)

On Friday morning, we repaired the dead server and booted it again:

  • the server automatically joined the CEPH cluster again
  • the OSDs on this server were automatically added back to the cluster
  • replication started again to reach an optimal replication state

Incident closed!

What else is there to say?

  • thanks to CEPH and the principle of server redundancy for letting us sleep at home instead of spending a night working in the datacenter
  • thanks to CEPH for being so magical
  • let's start the next step: configure pacemaker for automatic virtual machine failover across cluster nodes

Notes

[1] http://www.easter-eggs.com/

[2] http://ceph.com/

by Emmanuel Lacour at September 27, 2013 01:03 PM

[Libvirt] Migrating from on disk raw images to RBD storage

As we have just configured our first CEPH[1] cluster, we needed to move our current virtual machines (using raw images stored on a standard filesystem) so that they use the RBD block device provided by CEPH.

We use Libvirt[2] and Kvm[3] to manage our virtual machines.

Migration with virtual machine downtime

This step can be done offline:

  • stop the virtual machine
 virsh shutdown vmfoo
  • convert the image to rbd
 qemu-img convert -O rbd /var/lib/libvirt/images/vmfoo.img rbd:libvirt-pool/vmfoo
  • update the VM configuration file
 virsh edit vmfoo
 <disk type='file' device='disk'>
   <driver name='qemu' type='raw' cache='none'/>
   <source file='/var/lib/libvirt/images/vmfoo.img'/>
   <target dev='vda' bus='virtio'/>
   <alias name='virtio-disk0'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
 </disk>

becomes:

 <disk type='network' device='disk'>
   <driver name='qemu'/>
   <auth username='libvirt'>
     <secret type='ceph' uuid='sec-ret-uu-id'/>
   </auth>
   <source protocol='rbd' name='libvirt-pool/vmfoo'>
     <host name='10.0.0.1' port='6789'/>
     <host name='10.0.0.2' port='6789'/>
     ...
   </source>
   <target dev='vda' bus='virtio'/>
   <alias name='virtio-disk0'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
 </disk>
  • restart the virtual machine
virsh start vmfoo

Migration without downtime

The trick here is to use migration support in libvirt/kvm and the ability to provide a different xml definition for the target virtual machine:

  • get the current vm disk information
 qemu-img info /var/lib/libvirt/images/vmfoo.img
  • create an empty rbd of the same size
 qemu-img create -f rbd rbd:libvirt-pool/vmfoo XXG
  • get the current vm configuration
 virsh dumpxml vmfoo > vmfoo.xml
  • edit this configuration to replace the on disk image by the rbd one
 <disk type='file' device='disk'>
   <driver name='qemu' type='raw' cache='none'/>
   <source file='/var/lib/libvirt/images/vmfoo.img'/>
   <target dev='vda' bus='virtio'/>
   <alias name='virtio-disk0'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
 </disk>

becomes:

 <disk type='network' device='disk'>
   <driver name='qemu'/>
   <auth username='libvirt'>
     <secret type='ceph' uuid='sec-ret-uu-id'/>
   </auth>
   <source protocol='rbd' name='libvirt-pool/vmfoo'>
     <host name='10.0.0.1' port='6789'/>
     <host name='10.0.0.2' port='6789'/>
     ...
   </source>
   <target dev='vda' bus='virtio'/>
   <alias name='virtio-disk0'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
 </disk>
  • start the migration process
virsh migrate --live --persistent --copy-storage-all --verbose --xml vmfoo.xml vmfoo qemu+ssh://target_node/system
  • wait until the process finishes. The time to wait depends on your cluster performance and your VM size, but there is no interruption of the virtual machine!
  • you're done: your virtual machine is now running over rbd, and once you have checked it you can safely archive or destroy your old disk image.

Notes:

  • of course, you have to use libvirt/kvm with rbd support on the target node
  • you have to use a recent version of kvm; we had memory exhaustion problems on the hypervisor during the migration process with the Debian wheezy version

Notes

[1] http://ceph.com/

[2] http://libvirt.org/

[3] http://www.linux-kvm.org/

by Emmanuel Lacour at September 27, 2013 01:03 PM

How to self-publish your code with Git over HTTP

Introduction

Today I want to publish my scripts. A few days ago, I decided to use Git to manage them. But they were only visible to me, on my servers. So I decided to use Viewgit, a web interface written in PHP. It's cool! Now I can see my scripts in my browser! But I was still unhappy, because nobody could use git mechanisms like "git clone". So I wanted to use "git over http", with git-http-backend.

For this environment, I use the Nginx web server on Debian to serve the files.

Viewgit

The installation of viewgit is pretty easy: just download, untar and play. You must drop your git projects into the "projects" directory, like me:

/var/www/viewgit/projects

And declare your projects in /var/www/viewgit/inc/localconfig.php.

Your nginx config looks like this at this point:

vi /etc/nginx/sites-available/viewgit

server {
    listen 10.0.0.6:80;
    root /var/www/viewgit;
    index index.php;
    server_name git.d2france.fr;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9001;
    }
}

Git over http

Before using git over http, you need to know two fundamentals: first, whether you want to allow people to download your projects, and second, whether you want to allow people to push modifications to your projects.

To serve git clone, fetch and pull requests, git uses the http.uploadpack service.

To serve git push, git uses the http.receivepack service.

To provide those services, you need git-http-backend as a CGI backend script for your web server, and a CGI wrapper for nginx (fcgiwrap) to run it:

apt-get install git-http-backend fcgiwrap

With Nginx, the configuration could look like this:

server {
    listen 10.0.0.6:80;
    root /var/www/viewgit;
    index index.php;
    server_name git.d2france.fr;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9001;
    }

    location ~ ^/projects/.*/(HEAD|info/refs|objects/info/.*|git-upload-pack)$ {
        root /var/www/viewgit/projects;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME   /usr/lib/git-core/git-http-backend;
        fastcgi_param PATH_INFO         $uri;
        fastcgi_param GIT_PROJECT_ROOT  /var/www/viewgit/projects;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }
}

Here, I just want to share my scripts, so I only allow git-upload-pack requests.

It works!

Now you can clone your git repositories with this command:

 git clone http://server/projects/foobar

As you can see, viewgit doesn't let you add information, such as your git URL, to each project. A friend made a plugin for that; you can find his work at viewgit-projectinfos-plugin.

This article on my blog.

by Félix Defrance at September 27, 2013 10:06 AM

Disable IPv6 autoconfiguration at startup

On a LAN with IPv6 autoconfiguration enabled (using a radvd service, for example), it is often necessary to set static addresses on servers, and therefore to deactivate IPv6 autoconf on them.

With Debian 5.0 at least, it should be as easy as adding:

pre-up sysctl -w net.ipv6.conf.eth0.autoconf=0

in /etc/network/interfaces. But it doesn't work: unless you set up some IPv6 addresses earlier in the init process, the ipv6 module is not loaded, so net.ipv6 doesn't exist. To fix this, just explicitly add ipv6 to /etc/modules...

The same thing happens if you want to disable RA with net.ipv6.conf.IFACE.accept_ra=0
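
Putting the two pieces together, the setup might look like this (the interface name and address are examples):

```
# /etc/modules: load ipv6 early so that the net.ipv6.* sysctls exist
ipv6

# /etc/network/interfaces
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
    pre-up sysctl -w net.ipv6.conf.eth0.autoconf=0
    pre-up sysctl -w net.ipv6.conf.eth0.accept_ra=0
```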

by Emmanuel Lacour at September 27, 2013 10:02 AM

Convert an unsparse vm image to sparse vm image

A few weeks ago, I needed to convert a qcow2 image to a raw image, so I executed this command:

qemu-img convert -f qcow2 -O raw vm-foo.qcow2 vm-foo.raw

After that, I had an unsparse image, because qemu-img doesn't output sparse files. I saw this by running this command:

qemu-img info vm-foo.img

or

ls -lksh vm-foo.img

So now I want to convert this new VM image to a sparse file, because I want to free space on my file system. As you may know, in a sparse file, zero data doesn't take up space on your file system, unlike in an unsparse file.

Moreover, when files are deleted, their data stays in place on the disk (only the indexes are deleted).

In my case, I wanted to optimize my future sparse VM image, so I decided to force zero data into the image.

So, on the running guest, I wrote zero data as far as possible, using these commands:

root@foo# dd if=/dev/zero of=/tmp/zerotxt bs=1M
root@foo# sync
root@foo# rm /tmp/zerotxt

Then I shut down the guest and converted the unsparse file to a sparse file using the cp command:

cp --sparse=always vm-foo.raw vm-foo.raw-sparse

Well done, I got a clean sparse file!

qemu-img info vm-foo.raw-sparse
image: vm-foo.raw-sparse
file format: raw
virtual size: 40G (42949672960 bytes)
disk size: 6.3G

This article on my blog.

by Félix Defrance at September 27, 2013 10:01 AM

September 13, 2013

Easter-eggs

Monitoring the synchronization of your OpenLDAP directories

This problem may seem simple at first glance, but effective monitoring of OpenLDAP directory synchronization is not as trivial as it looks. All the complexity lies in the relatively simplistic syncrepl replication mechanism: an outdated LDAP schema or badly defined ACLs can easily desynchronize your directories without it being very visible.

The syncrepl replication mechanism relies on version identifiers of the data held in the directory to determine which information must be replicated and which information is the most up to date (in the case of master-master replication). These version identifiers are stored in the contextCSN attribute of the directory root and in the entryCSN attribute of each object in the directory. The values of these (necessarily indexed) attributes are built from the last modification date. Through OpenLDAP's syncrepl overlay, this makes it possible to determine, from a replica's contextCSN, which LDAP objects of the source directory have been modified since then and therefore need to be synchronized. Replicating an object then consists of transferring the complete object from one directory to the other, with no distinction between attributes: all attributes are replicated, whatever the modification that triggered it. This very simple mechanism is unfortunately not very robust, and desynchronization cases are relatively frequent. Good monitoring is therefore essential, all the more so because a broken synchronization does not prevent a replica from answering the requests sent to it.

A Nagios (or Icinga) check plugin already existed, but it relied only on the contextCSN value of the directories, with no object-by-object, let alone attribute-by-attribute, verification. It could therefore miss a desynchronization.

So I had the opportunity to develop one that, in my opinion, takes a more global approach to monitoring syncrepl replication. This plugin does not merely check the contextCSN values; it can verify the objects present in each directory, the value of their entryCSN attribute, and even the values of all their attributes. Obviously, a more exhaustive check costs more resources, which is why I added several parameters allowing a more or less complete verification of the synchronization state:

  • --filter: check only a subset of the directory's objects, by specifying an LDAP search filter
  • --no-check-contextCSN: disable the contextCSN check of the directories
  • --attributes: enable validation of the values of all attributes of all objects in the directories

Note, however, that the most complete check on a directory of about 10,000 objects takes only a few seconds (between 3 and 10 seconds, depending on server load).
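
The object-level comparison boils down to diffing the entryCSN values seen on each server. A simplified illustration (not the plugin's actual code):

```python
def compare_entrycsn(provider, consumer):
    """provider/consumer map entry DNs to their entryCSN value,
    as returned by an LDAP search on each server."""
    missing = set(provider) - set(consumer)    # not yet replicated
    orphans = set(consumer) - set(provider)    # deleted on the provider only
    stale = {dn for dn in provider.keys() & consumer.keys()
             if provider[dn] != consumer[dn]}  # replicated but out of date
    return missing, orphans, stale
```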

The plugin can be downloaded here.

Usage example:

check_syncrepl_extended \
     -p ldap://ldap0.example.lan \
     -c ldap://ldap1.example.lan/ \
     -D 'uid=nagios,ou=sysaccounts,o=example' \
     -P 'password' \
     -b 'o=example' -a -n

Definition of the corresponding command in Nagios:

define command {
        command_name    check_syncrepl
        command_line    /usr/local/lib/nagios/plugins/check_syncrepl_extended -p $ARG1$ -c ldap://$HOSTADDRESS$/ -b $ARG2$ -D '$ARG3$' -P '$ARG4$' -a -n
}

Definition of the corresponding service:

define service{
        use                     generic-service
        service_description     LDAP Syncrepl
        check_command           check_syncrepl!ldap://ldap0.example.lan!o=example!uid=nagios,ou=sysaccounts,o=example!password
        host_name               ldap1
}

by Benjamin Renard at September 13, 2013 04:03 PM

Mailt: the indispensable tool for testing your mail servers

When setting up a mail server, sooner or later you will need to send a test mail or check an IMAP or POP connection. To make all this easier, we put together a handy toolbox named Mailt. It currently consists of three tools:

  • smtpt: send a mail in a single command to the SMTP server of your choice, simply specifying the recipient, the sender or the mail content. No more manual telnet sessions full of copy/paste to simulate the scenarios you want to test! This command is all the more useful if you want to validate your anti-spam and/or anti-virus analysis:
    • the --spam parameter easily adapts the mail content so that it contains the GTUBE test string, which will be treated as spam by your anti-spam.
    • the --virus parameter likewise inserts the EICAR test string into the mail body, which will be treated as a virus by your anti-virus.

You can also easily validate a STARTTLS or SMTPS connection, authenticated or not, and with the --debug parameter it will be as if you had typed everything manually in a telnet or openssl s_client session.

  • imapt: test the connection to an IMAP server of your choice in a single command. Simply provide the connection information (user, password, server, port, SSL, INBOX folder, etc.) and this command will validate the connection for you, displaying the number of messages in the folder of your choice. You can easily adjust the verbosity of your tests with the --verbose or --debug parameters
  • popt: test the connection to a POP server of your choice in a single command. In the same way, from the given parameters, this command will validate the connection to the server of your choice and display the number of mails on the server.
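To illustrate what the --spam parameter effectively does, here is a minimal Python sketch using only the standard library (not Mailt's actual code): it builds a test message containing the GTUBE string. The SMTP submission is left in a comment since it needs a reachable server, and mail.example.com is a placeholder.

```python
from email.message import EmailMessage

# The standard GTUBE test string: any working anti-spam filter flags a
# mail containing it as spam (see spamassassin.apache.org/gtube/).
GTUBE = "XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X"

def build_spam_test(sender, recipient):
    """Build a test mail whose body contains the GTUBE string."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "anti-spam test"
    msg.set_content("Test message.\n" + GTUBE + "\n")
    return msg

# To actually send it (requires a reachable SMTP server):
# import smtplib
# with smtplib.SMTP("mail.example.com", 25) as smtp:
#     smtp.send_message(build_spam_test("me@example.com", "you@example.com"))
```

The EICAR case works the same way, with the EICAR test string in place of GTUBE.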

This tool suite is written in Python and uses standard libraries, usually already present on your servers (Debian package python-support). It is easily installed through a very lightweight Debian package, downloadable here, or by adding the following Debian repository and installing the mailt package:

deb http://debian.zionetrix.net/debian/ squeeze mailt

Note: these packages are also available for the testing version of Debian (Wheezy).

by Benjamin Renard at September 13, 2013 04:02 PM

Dokuwiki: a custom access-denied page

When a user accesses your Dokuwiki wiki without being allowed to, a message informs them that access to the page is forbidden and suggests they log in. Depending on how you use your wiki, this message does not necessarily match reality. It can therefore be useful to have a customizable page which, in your own words, explains to the user why they got this page, or how to reach the page they wanted.

With this in mind, I wrote a Dokuwiki plugin named deniedpage which lets you define a page of your wiki to which the user will automatically be redirected when access to a page is denied. You can choose the page using the configuration page of the administration section. Like any other page of your wiki, you can create, and later edit, this page using the online editor.

Thanks to Dokuwiki's extension manager, installing this plugin is as simple as pasting the following URL into the download field and clicking the Download button:

https://github.com/brenard/dokuwiki-plugin-deniedpage/zipball/master

Updating is just as simple, using the Update button. Remember to enable the plugin after installing it, and make sure your custom error page is accessible to everyone. For more information on this plugin, see its page on Dokuwiki.org.

by Benjamin Renard at September 13, 2013 04:01 PM

August 08, 2013

Frédéric Péters

GUADEC is over

And I wasn't there, but that has nothing to do with GNOME, just that it conflicted with another important project I had for almost a year, Radio Roulotte and a recurring one, Radio Esperanzah!.

The idea of Radio Roulotte mostly came last year: getting a caravan and two horses, visiting various villages, meeting locals and producing a radio show with them. We were a small team talking about it, then preparing it, making new contacts, requesting some money, reshaping the caravan, etc., but it only became real when we met up and the horses arrived.

And the days were cut in two parts, travelling in the morning...

Road from Buzet to Soye, July 27th

Road from Floreffe to Buzet, July 26th

In the streets of Floriffoux, July 28th

... then assembling the studio, and that meant getting stuff out of the caravan, getting other stuff in, including electrical power, calibrating the satellite dish, etc.

In Soye, July 27th

All of this to get ready at 6pm to produce one hour of radio, live with locals.

Studio in Floreffe, July 25th

Studio in Soye, July 27th

Studio in Floriffoux, July 28th (outside for the last one)

And as quickly as it started the week was over, we said goodbye to some team members, took a day almost off, and started welcoming members of the radio Esperanzah! team. That project is well oiled, it was the 10th time it happened, it's about covering the various parts of the Esperanzah! music festival.

So we went and assembled things again, the studio as well as our work room, the FM transmitter and computers below the stages to record the concerts.

Hardware below a stage

One day schedule on the board

The festival started, and we kept working, presenting the daily programs, interviewing artists and other participants, recording in the alleys...

Esperanzah Camping filling up

A concert on the stage

And for my part, mixing the concerts, so we could broadcast one in the evening and offer them to the artists. For the first time I did it with Ardour 3 (a git snapshot actually, 44fc92c3) and it went beautifully.

Working with Ardour 3

My horizon for three days

As usual I only attended a few concerts, but at least I got to see An Pierlé and Asian Dub Foundation.

So here you are, you now know what I did during your GUADEC. I heard many good things about Brno; let's work now to get 3.10 rocking in September, and see you in Strasbourg for the next GUADEC.

by Frédéric Péters at August 08, 2013 01:18 PM

June 26, 2013

Emmanuel Saracco

Compostela by bike

A bicycle trip to Santiago de Compostela, following the Via Turonensis and the Camino Francés.

June 26, 2013 10:48 AM

May 16, 2013

Emmanuel Saracco

Bike trip Tours - Parthenay

A few days by bike, riding Tours - Parthenay and visiting the surroundings.

May 16, 2013 05:13 PM

March 29, 2013

Emmanuel Saracco

Bike trip Tours - Saumur

A short weekend by bike, a round trip between Tours and Saumur.

March 29, 2013 07:46 PM

March 28, 2013

Easter-eggs

Dokuwiki: LDAP + HTTP authentication

For one of our clients, the following problem came up: to support the SSO authentication in place in their infrastructure, Dokuwiki had to not authenticate users directly, but instead trust the authentication performed by Apache. For this, Dokuwiki, with its authentication plugin system, implements a relatively simple solution:

  • Dokuwiki (core) retrieves the authentication information provided by Apache and passes it to the plugin via the trustExternal() method
  • Plugins implementing this method use the provided authentication information to authenticate the user without needing to show a login form.

If, like us, you also need to retrieve information about your Dokuwiki users from an LDAP directory while trusting the authentication done by Apache, you will be happy to learn that this is now possible. Until now, Dokuwiki's LDAP authentication plugin (authldap) did not implement this trustExternal() method and therefore always showed the user a login form, even when they had already been authenticated by Apache. We implemented this method and submitted it to the Dokuwiki project; the corresponding Pull Request on Github is here
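The trustExternal() pattern can be sketched in a few lines. This is a language-neutral illustration in Python (Dokuwiki plugins are actually written in PHP), and the function and user directory below are hypothetical: the web server has already authenticated the user, so the application only reads REMOTE_USER and looks up account details.

```python
def trust_external(environ, user_directory):
    """Trust the REMOTE_USER already set by the web server (e.g. Apache)
    and fetch user details from a directory, here a plain dict standing
    in for LDAP. Returns user info, or None to fall back to a login form."""
    login = environ.get("REMOTE_USER")
    if not login:
        return None  # no external authentication happened
    info = user_directory.get(login)
    if info is None:
        return None  # authenticated, but unknown in the directory
    return {"user": login, "name": info["cn"], "groups": info["groups"]}
```

The key point is that no password is ever checked here: the application trusts the front-end authentication and only enriches it with directory data.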

by Benjamin Renard at March 28, 2013 06:23 PM

March 17, 2013

Frédéric Péters

At Home

Long time without any activity here, and more generally less time with computers those past months, even though I visited Lyon for the JDLL in November and I was of course present in Brussels for the FOSDEM and the developer experience hackfest that happened just before. (I didn't write about it but it was totally unexpected for me to find myself there with two other motivated devhelp developers, many thanks to Aleksander and Thomas.)

But I am now finally back in action, installed in my new place, and for the occasion here are some pictures, starting perhaps with the most raw moment, after a few weeks:

Walls and flooring removed, October 2012

I kept my other flat for a few months but had to let it go by December, and by chance my new upstairs neighbour offered me her spare bedroom, as well as her attic to store my boxes (thanks Fleur!). Still, it took longer than expected and I grew quite impatient to settle into my new place; that finally happened ~10 days ago.

Temporary office space

The packing boxes left the attic but most of them are still unopened.

Packing boxes moved to the future living room

The kitchen is almost done, not shown in the picture: a fridge is still missing.

Black and white kitchen

And I have the most fabulous bathroom.

Tetris bathroom

Thanks to my good friend Macha for the architect work she did; it's this nice because she always kept an eye on the smallest details.

by Frédéric Péters at March 17, 2013 02:06 PM

January 04, 2013

Emmanuel Saracco

wbmtranslator 0.7.2 released

wbmtranslator is a translation assistant for webmin/usermin modules.

January 04, 2013 03:22 AM

Labs

wbmtranslator 0.7.2 released

  • Fixed translation console encoding.
  • Fixed a bug with core Webmin translations.
  • Removed Google translation console and replaced it with Reverso.
  • Save user translation console choice in a cookie.
  • Updated "webmin-core-modules.data".
  • Moved *modules.data files to "data/" directory.
  • Updated wbmtranslator download URL.

by Emmanuel Saracco at January 04, 2013 03:07 AM

November 08, 2012

Emmanuel Saracco

October 30, 2012

Emmanuel Saracco

wbmclamav 0.15 released

wbmclamav is a webmin module to manage Clam Antivirus.

October 30, 2012 12:15 PM

Labs

wbmclamav 0.15 released

  • Updated for new clamav 0.97.6. Older versions of ClamAV are not supported anymore.
  • Added the following clamav options: OLE2BlockMacros, ClamukoExcludeUID.
  • Added the following freshclam options: DatabaseCustomURL, ExtraDatabase.
  • Fixed bug with recent ClamAV in database update section.
  • Fixed bug with select/unselect all in quarantine section.
  • Fixed clamscan report parsing and double-display in directories scan section.
  • Removed viruspool (dead link) and added F-Secure in viruses search database results.
  • Fixed a bug with environment variable PATH.
  • Renamed "virii" to "viruses".
  • Updated deprecated URLs.

by Emmanuel Saracco at October 30, 2012 11:55 AM

October 03, 2012

Frédéric Péters

bin/recent

For quite some time the access to recent files has been put forward in GNOME, it happened even more so in 3.6 with a "Recent Files" view in Files (née Nautilus), that makes use of a new recent files backend in gvfs.

This is all very nice but my daily activities still involve a lot of command line usage, and I didn't find any way to mark as recent the files I receive via mutt, the text files I create in vim, the pictures I resize with ImageMagick, etc. That always bothered me at the moment I wanted to access those files, but then I just put a copy of the file in a scratch directory I had bookmarked, and went on with my work.

Until yesterday, as I finally decided to fix that, and quickly put together recent, a command line utility that just puts the file it gets as argument in the recent files list. It's very simple, uses GFile and GtkRecentManager, and the code is located there: recent.c. It's so simple I guess many others wrote something similar, but here you have, perhaps it will be useful.

by Frédéric Péters at October 03, 2012 09:21 AM

September 28, 2012

Frédéric Péters

Releases!

/files/gnome-3.6-release.jpeg

On Wednesday GNOME 3.6 was released; many thanks to all the people involved, this release is definitely a great one. And then today it was my turn to be released; many thanks for the kind words (and phone calls, and visits).

by Frédéric Péters at September 28, 2012 06:23 PM

September 26, 2012

Emmanuel Saracco

randonnee-velo.fr site redesign

A big overhaul of my bike-touring website over the last few days.

September 26, 2012 11:30 PM

August 12, 2012

Frédéric Péters

Esperanzah! 2012

The morning walk to GUADEC

A Coruña, July 25th 2012.

Barely landed back from GUADEC, a new bag packed; barely a detour through Brussels, a train caught; barely arrived in Floreffe, non-stop life, a fantastic crew; thank you all.

Mémoires de Radio Esperanzah! 2012

And then already the return, to Brussels and to work, between lamb chops and a canal bank (above), between an industrial estate and a disused warehouse (below), an altogether strange week.

But to end it, a return by way of GNOME: the 201st commit digest, and 40 bugs fixed in the "website" product.

by Frédéric Péters at August 12, 2012 07:28 PM

August 05, 2012

Emmanuel Saracco

Bike trip Tours - Rigny-Ussé

A little two-day bike ride with Vanessa to visit the Sleeping Beauty castle in Rigny-Ussé.

August 05, 2012 06:30 PM

August 03, 2012

Frédéric Péters

GUADEC 2012

GUADEC is now over but I still have a few long days ahead, as I packed a new bag as soon as I got home; this time the direction is Floreffe, in the Belgian countryside, to help with the ephemeral radio station we install every year during the Esperanzah! music festival.

Everything is now ready and we still have a few hours before the doors open so here I am, writing my own take on the time spent in A Coruña.

After an uneventful flight and the preregistration event, we at the release team had a meeting over lunch on Thursday where we discussed our usual stuff, and a little bit about the report we would give during the foundation AGM. It was planned to be the usual report, "we released 3.2 and 3.4, they were on time, we also did this and that…", but we changed it after the many discussions we all had, about Xan and Juan José's "a bright future for GNOME" talk, and about Benjamin's "staring into the abyss" blog post. It seemed like the air was full of expectations regarding the release team, often beyond our actual attributions.

That's why, to go past the informal discussions, we changed our AGM presentation to bring the question upfront, to all foundation members: "what do you expect of the release team?". I am quite happy about the way it went, many opinions were heard during the AGM, even if we had to stop the discussion to leave room to other teams and members.

But we continued gathering feedback and it all fueled the "GNOME OS" BoF on Monday. While the morning was kept on technical grounds (application sandboxing, OSTree…), the afternoon turned into a very interesting conversation on priorities and targets. From the form factors we should target (laptops, but taking touch into consideration) to core applications, I believe we laid out the foundations for a solid plan. It will now have to be discussed further, with community members that couldn't attend GUADEC and that BoF. More on that soon, on desktop-devel-list.

And then GUADEC was quickly over; all in all it felt like we started with many doubts but shook them off and ended with strengthened confidence in the project, something that will be necessary to keep it going strong.

Others have written about the food, the parties, the games, and all the other community bonding moments; I won't repeat them, but I share their sentiments. It has been a great event, and I give all my thanks to the organizers and local team, and of course to the foundation who sponsored me. Thank you all!

/files/sponsored-badge-shadow.png

by Frédéric Péters at August 03, 2012 07:44 AM

July 04, 2012

Emmanuel Saracco

Ireland by bike

Published the travel journal of my bicycle tour around Ireland.

July 04, 2012 01:47 PM

June 26, 2012

Emmanuel Saracco

Tour of Vendée by bike

Published the travel journal of my bicycle tour of Vendée with my father.

June 26, 2012 08:41 PM

May 09, 2012

Frédéric Péters

Engaging the path to 3.6

Long time no post, I should really have had something up for 3.4.0, and that was of course the plan but on short notice I went to Vientiane, Laos, for $dayjob, the week of the release. Timezone difference, internet connection at the hostel, and a busy schedule made it quite impossible to participate in the release. I didn't stay long but I had a really good time there, thanks again Chanesakhone and Jean Christophe.

a temple in Vientiane

Vientiane, Laos, March 25th 2012

Then, back in Brussels I had to spend time on my new apartment, various administrative tasks, arranging things for gas, electricity and water, keeping an eye on the roof where workers had to knock a chimney down, discussing plans with an architect friend, and so on.

Keys for my new apartment

Keys for my new apartment (and a Collabora bottle opener)

Things finally settled down and two releases have now consecutively been done; 3.4.1 brought some important fixes, and improvements in accessibility, translations and documentation; then last week 3.5 opened the path to 3.6. A new adventure begins…

by Frédéric Péters at May 09, 2012 03:05 PM

April 13, 2012

Emmanuel Saracco

February 24, 2012

Frédéric Péters

Dunkerque 2012

Go at 10:17, we spot each other, last steps of the escalator, we'll say hello later, sprint! 10:18, the train's official departure time. 10:19, on the platform, it's still there, we jump in. We catch our breath.

Lille. Then Dunkirk. Meeting people again and for the first time, getting lost in the costumes, trying a thousand combinations, putting on make-up, going out.

Carnival.

Lines. Chahuts. Herrings. Rigodon.

The next day, strolling around, crossing the border again, La Panne, greeting everyone, missing a train, running in Ghent, arriving in Brussels, spending more time together, catching the last metro.

Moments that will be cherished. N.

A dress, some babeluttes

by Frédéric Péters at February 24, 2012 01:32 PM

January 31, 2012

Frédéric Péters

Commit Digests

After several months on hiatus, then some January evenings to process the backlog, I am happy to have the commit digests back to the present day.

What now? I'll try to get back to the weekly updates, whatever the weather.

Of course you can help; whenever you see a noteworthy commit, whenever you make a noteworthy commit, just send me an email, or ping me on IRC; this will help me, and could also bring other perspectives on what constitutes a “noteworthy” commit. And if you love the commit digests, if you have time on your hands, you can help extend the project to new heights: got an interest in statistics? got an interest in interviews? There's a place for you.

Happy reading!

by Frédéric Péters at January 31, 2012 07:41 PM

January 30, 2012

Frédéric Péters

January 06, 2012

Frédéric Péters

2012

Hanging around here and there I have inevitably read a few 2011 retrospectives; I'll pass my turn (dyslexic, I would have written « je passe mon trou » and that would have made us laugh a little), but I had told myself that in my post about Montreal (its people, its bars, the casa del popolo) there would be room for the readings; that post will never come, too bad for the photos, but still: 2011 retrospectives, reading lists, pretext and an overlong sentence.

Capitalisme, désir et servitude, by Frédéric Lordon. Huge. Left on the plane on the way back, bought again during the Christmas shopping to read its last pages. Subtitled « Marx et Spinoza ». And from the latter, this sentence set as an epigraph, « By reality and by perfection I understand the same thing », which is after all the very sentence I copied down at once while reading the Ethics…

One book, many(?) others, and after reading De onze à douze I finally got motivated to inventory my library (the subtext being the prospect of a move…): goodreads, a few evenings of data entry, punctuated of course by collapsing piles, but at the end, finally, a spreadsheet. And a heap of amusing statistics to compute. Another time.

And to end with the subtext, first read of the year.

by Frédéric Péters at January 06, 2012 12:57 PM

November 22, 2011

Frédéric Péters

It was

It was this weekend, the journées du logiciel libre in Lyon; it was last weekend, the Ubuntu Party in Paris; it was already more than a month ago, Montreal; it was, oh, five months… already.

It was, calibrated, on a six-month rhythm, the Ubuntu Party in Paris; it was, on a yearly rhythm, the JDLL last year; and so it is now more than a year since I sat there, on the slopes of the Croix-Rousse, talking, not knowing. Between… and …

Leaving, today, one question fewer. But I don't know what.

by Frédéric Péters at November 22, 2011 01:30 AM

November 04, 2011

Emmanuel Saracco

October 11, 2011

Frédéric Péters

Montreal Summit 2011

The date came late, and it was definitely not at the best time with regard to some projects at work, but I decided to go nevertheless, and have to give my thanks to the GNOME Foundation, and the travel committee, for quickly accepting when I asked for sponsorship.

Probably because of the short notice it felt like some important teams didn't have enough representation, and while this gave ample place for some topics (building gnome!) I wish we had enough teams for a roundup of the different aspects of GNOME. On the positive side this wide cooperation is happening in the mailing list discussion on freezes, with translators, documentation team, release team and other interested parties.

Still, back to Montreal and the summit: I spent much of the first day testing and reviewing jhbuild patches, wrapping the day with the presentation of Baserock by Lars Wirzenius. The second day was more diverse, and more intense, with (I heard) a very interesting discussion on GNOME strategy (Tiffany wrote about it in detail) that happened at the same time as a jhbuild (and more) session led by Colin, and later in the afternoon a good series of questions asked by Xan about our (lack of a clear) developer platform.

Colin on JHBuild

And the Collabora party, of course.

Then on Monday, more patch reviews, including (at last) Bug 654872 - Delete no longer shipped files at install time but the day was short as many people had to leave early, so it ended with random hacking and bug filing, with the good luck of hitting a bug in glib-networking with Nicolas Dufresne sitting just behind.

All in all this was my first summit and it went well, it would sure benefit from some earlier planning (both dates, and sessions), but this was a nice chance to see new heads (and known heads, of course), especially as I was not in Berlin this summer.

by Frédéric Péters at October 11, 2011 11:35 PM

Six months

Six months, almost, since the last posts: time, places, no photos, quenelles, and for a few more days still, Québec.

View from Mont Royal, with leaves

Montreal, October 5th 2011

by Frédéric Péters at October 11, 2011 11:02 PM

September 23, 2011

Emmanuel Saracco

September 07, 2011

Emmanuel Saracco

July 27, 2011

Emmanuel Saracco

Trip to Greece

Published the photos from my trip to Greece (no bike this time ;-) ).

July 27, 2011 11:04 AM

July 15, 2011

Benjamin Dauvergne

python-oath v0.9

The library has made its debut on pypi as version 0.9; the code for HOTP and TOTP is complete and comes with a test suite covering the test vectors from the RFCs.

A new specification has joined it: OCRA, or RFC 6287. OCRA defines not one but a whole family of algorithms for simple or mutual authentication as well as signature, based on challenges over the values returned by the hash function defined by HOTP. Each algorithm is described by a string such as:

OCRA-1:HOTP-SHA256-6:QN08

This string reads as follows:

  • following the V1 syntax of the OCRA specification: OCRA-1
  • using the HMAC/HOTP algorithm, with the SHA256 hash function and a six-character decimal result: HOTP-SHA256-6,
  • in response to a numeric challenge of at most 8 digits: QN08.

It is designed to cover a wide range of needs while letting multiple implementations interoperate easily, the description format making it possible to verify and configure each implementation.

The library includes the code to parse these strings as well as the code computing the digests (hashes). The next step is importing the test vectors from the RFC and writing unit tests that use them; that should become version 1.0.
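The HOTP building block that OCRA relies on is small enough to sketch directly from the RFC. This is a from-scratch illustration of RFC 4226, not python-oath's own code:

```python
import hmac
import hashlib
import struct

def hotp(key, counter, digits=6, digestmod=hashlib.sha1):
    """Compute an HOTP value (RFC 4226): HMAC the 8-byte big-endian
    counter with the secret key, apply dynamic truncation, and keep
    `digits` decimal digits (zero-padded)."""
    mac = hmac.new(key, struct.pack(">Q", counter), digestmod).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

TOTP is then just HOTP applied to a time-derived counter, and OCRA replaces the counter with a structured challenge message.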

by admin at July 15, 2011 02:09 PM

June 16, 2011

Emmanuel Saracco

May 24, 2011

Emmanuel Saracco

April 25, 2011

Emmanuel Saracco

April 19, 2011

Frédéric Péters

Rejuvenating your release team

Vincent is taking his release team hat off and dropped it on my head. I am a bit sad because the real blue hat has been lost, but I am very happy to be here at this time: GNOME 3 is out, and people love it.

For 3.2 we will continue to have our work driven by design, and we are making adjustments to our schedule and processes to keep going with a global vision; there have been a few emails about feature planning on desktop-devel-list, and we will expand on that soon. But for now, I wanted this post to give all my thanks to Vincent Untz (plenty of time for ice cream now), Lucas Rocha (don't forget to add ajax support to the board), and Frédéric Crozat (we will continue harassing you for live usb images), who are leaving the team, and to welcome our new members:

  • Luca Ferretti, he was already a team member but Vincent gave him a trainee badge as no one was leaving at that time; he has already been helping with releases;
  • Javier Jardón, he arrived on #gnome-love someday, got hooked fixing build failures and went on to lead wide goals to improve our modules, and more;
  • Alejandro Piñeiro Iglesias, I met him in the build brigade, but really he is now an accessibility guy, and his expertise in the domain will be immensely valuable;
  • Colin Walters, shell developer, involved with gobject introspection from the beginning, his latest feat has been to push for a standalone spidermonkey release from our friends in Mozilla.

Let's now go to 3.2, and beyond!

by Frédéric Péters at April 19, 2011 08:21 AM

April 11, 2011

Emmanuel Saracco

April 07, 2011

Frédéric Péters

GNOME 3.0

With no consideration for Bangalore timezone or my sleep schedule, GNOME 3.0 has now been released! Live images are already updated (go try them) and packages are flowing into distributions.

/files/iamgnome.png

I originally had plans for some tourism in Bangalore after GNOME.Asia (fantastic event) but didn't do much in the end as everyone was working hard on the release, and I certainly didn't want to let it happen without my part of the effort.

And that effort has been concentrated on the documentation websites, library.gnome.org, updated to the new website look, and developer.gnome.org, a revived site dedicated to developers, working on and with GNOME technologies. It couldn't have happened without the Berlin Development and Documentation Tools hackfest, and the collective effort of numerous hackers, inspired by the immense work Shaun McCance has been doing for years.

Thanks everyone for making it happen, and let's all step into the GNOME 3 era.

by Frédéric Péters at April 07, 2011 06:58 AM

April 01, 2011

Frédéric Péters

Bad news for Mozilla embedders

Heise Online just published an article, Mozilla kills embedding support for Gecko layout engine; without making any fuss it starts with "Mozilla has officially ended support for embedding the Gecko layout engine in applications other than Mozilla core applications", then links to Benjamin Smedberg's post on mozilla.dev.embedding, but it doesn't offer many details, or reasons (other than "our product is firefox, we have to focus ressources there.").

The article ends with an open question about applications that are currently using Gecko, but it erroneously cites Devhelp: Devhelp was ported to WebKit a long time ago (I had a post titled "Devhelp with Webkit" back in 2007).

No worries for Devhelp then; but while the decision only concerns Gecko, it could affect us if it were extended to Mozilla's JavaScript engine (SpiderMonkey). A few months ago Colin Walters was actually quite positive ("Actually we're discussing this upstream again very productively; there's renewed interest in supporting embedders, and I'm in the process of getting some patches in to help here."), but who knows... Mozilla certainly keeps on ignoring some of our needs; I can't count the number of times jhbuild had to be updated because a xulrunner tarball was removed from their mirrors. (Last time? Two days ago, bug 645971.)

by Frédéric Péters at April 01, 2011 01:44 AM

March 31, 2011

Frédéric Péters

199 / 199

Fourth day of the Bangalore release hackfest and things are going smoothly; after the Intel offices in the beginning of the week (thanks Intel!) we are now at the GNOME.Asia summit venue, the Dayananda Sagar Institutions, still hard working at "release team" stuff, and more.

Today, between testing and approving patches (go read the On the road to GNOME 3.0 post of Olav for details) and work on the future library.gnome.org and developer.gnome.org, I managed to reach the mythical "100% building" status on my build slave on build.gnome.org, 199 out of 199 modules built correctly, at the same time.

Good sign for the forthcoming release!

by Frédéric Péters at March 31, 2011 11:58 AM

March 30, 2011

Labs

Server upgraded from Debian Lenny to Debian Squeeze

The server hosting this forge has just been upgraded to Debian Squeeze. Tell us if you find any new problems.

by Emmanuel Lacour at March 30, 2011 04:00 PM

March 28, 2011

Frédéric Péters

Bangalore Release Hackfest

First day of the hackfest and Allan Day wrote we would do release team things, but what would those "release team things" be? Many things!

Just go and see all the things Andre did today, or the gnome-panel branch of Vincent (299 files changed, 17334 insertions(+), 27275 deletions(-)).

"Now let me break stuff", "3.0-freeze-break", noticed the pattern? But don't take it seriously, we are making 3.0 rock, together with the release team members that couldn't join us and are also working hard.

What about me, some details about what I did? I went for boring things, like pushing a preliminary set of modules for our second release candidate, which was initially planned for today, then building and smoke-testing it, including the gnome-panel changes mentioned above. For the record, that release will finally happen tomorrow, and I blame the timezone for this.

That's it, you should now imagine a "Sponsored by the GNOME Foundation" badge here (thanks!), and a "meet me at GNOME.Asia summit 2011" image on the other side; and go over to read what the other participants did — they've done so much already.

by Frédéric Péters at March 28, 2011 06:51 PM

March 13, 2011

Frédéric Péters

Libnotify Adoption

This is a quiet Sunday, perfect to spend time reading blogs, and doing so made me want to check in Bugzilla whether requests for supporting appindicators had been mistreated; but the first bug report I read was about Empathy, and I read the following comment:

« My understanding of the GNOME release process is that the release team prefers to see a library used as a configure time option of several projects before accepting it as a dependency. This removes the "I built a library that sounds good but no one really uses" problem of adding libraries to the platform. For instance, libnotify has been used by many programs before it was a blessed dependency for 2.26. »

—Ted Gould, in bug 574744, "Empathy could take advantage of the Messaging Indicator", 2009-04-06.

So I was derailed and went looking for the case of libnotify, which I had forgotten; and indeed, libnotify was proposed for 2.20, and refused:

+ libnotify
  - mixed feelings within the release team.
  - this is being used more and more, so it's pretty clear
    there's a demand for this.
  - some of us still strongly feel that it should live in one
    of the libraries we already have in our stack (probably
    GTK+). We'd like to see people working on integrating
    this in our stack, or explaining why it's not possible to
    do so.
  - what might be worth is accepting the dbus API: this is
    something that will happen more and more in the future
    (think Xesam, for example). The API probably needs to be
    standardized, though.

— Vincent Untz, in New module decisions for 2.20, 2007-08-10
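For context, the "dbus API" mentioned above is the Desktop Notifications specification's `org.freedesktop.Notifications` interface, which libnotify wraps. A minimal sketch of assembling the arguments for its `Notify` method (signature `susssasa{sv}i` per the spec) might look like this; the function name and sample values are illustrative, not part of libnotify itself:

```python
# Sketch of the arguments that the org.freedesktop.Notifications `Notify`
# D-Bus method takes, following the Desktop Notifications specification.
# `build_notify_args` is a hypothetical helper, not a libnotify API.

def build_notify_args(app_name, summary, body,
                      replaces_id=0, app_icon="",
                      actions=None, hints=None, expire_timeout=-1):
    """Return the 8 positional arguments for `Notify`, in spec order:
    app_name (s), replaces_id (u), app_icon (s), summary (s),
    body (s), actions (as), hints (a{sv}), expire_timeout (i)."""
    return (app_name, replaces_id, app_icon, summary, body,
            list(actions or []), dict(hints or {}), expire_timeout)

args = build_notify_args("demo-app", "Build finished",
                         "199 out of 199 modules built")
print(len(args))      # 8 arguments, as the spec defines
print(args[3])        # the summary string
```

A real call would pass this tuple to a D-Bus binding (or simply use libnotify, which handles it all); the point of the quoted discussion was that standardizing this wire API mattered more than blessing any particular client library.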

And it was finally in 2.26 that it was approved, and it was still noted at that time that it would be better to have it in GTK+.

+ libnotify (external dependency)
 - widely used
 - would be nice to have a more active development
 - feature that should live in GTK+ in the future (when dbus
   can be used there)
 => approved
   The release team wants to stress out that it should really
   not be abused (as it tends to be).

— Vincent Untz, in New module decisions for 2.26, 2009-01-21

For reference I found this list of requests for appindicator support: Empathy, Ekiga, Epiphany, Vino, and GNOME Control Center, in case someone wants to look for mistreatments.

by Frédéric Péters at March 13, 2011 04:16 PM