
December 06, 2016

Last Friday was the six-month anniversary of Nextcloud, a good opportunity to look back and reflect on what we have achieved since we started. I also have some interesting news to share, including that Nextcloud GmbH is already a profitable company!

Half a year ago, most of the ownCloud engineering team, including myself, started the Nextcloud project. Our goal was to take lessons from the past and create a next generation open source project with a better, more stable company behind it. Those were very ambitious goals, but I’m happy to report that things have worked out better than I was hoping for!

With Nextcloud we wanted to create a more sustainable ecosystem with the right balance between stakeholders’ needs, both technical and business-wise. When done right, both the project and the customers win. I tried this when I started ownCloud but unfortunately I wasn’t successful.

The good news is that it is working very well this time! Nextcloud has gained huge community traction as well as massive commercial interest. Let me cover the different areas in more detail.


Nextcloud has become the most active project in our space and we’re still growing fast.
This is because we do a few things right:

  • Nextcloud does not require assigning copyright to a single commercial, for-profit entity (no Contributor License Agreement). So contributors don’t have to give up the rights to their code and transfer them to another entity. The way Nextcloud handles this is similar to a lot of other open source projects, creating a fair and level playing field for customers, partners and other participants. Obviously, this is appreciated.
  • We decided against an open-core business model, making Nextcloud 100% open source and free software without proprietary enterprise features. As Gartner points out, open core is at best a marketing tool: customers get none of the benefits of open source and contributors don’t get ownership over the project they built together. With Nextcloud, customers have legal certainty and no vendor lock-in while contributors are equal members of the project they participate in. Again, something our customers and contributors love.
    See also our blog on why AGPL is great for business users.
  • Our hackathons and company meetings are open to people from the outside. We already organized four hackathons in the last 6 months with a significant number of community contributors and partners attending.
  • We organized our first Nextcloud conference, hosted at TU Berlin. To be honest, this was an ambitious project for such a young community. But it turned out really well, with more attendees than expected, great keynote speakers and nice press coverage. That includes the launch of the Nextcloud Box we did there, which was a huge success.
  • We revived our Meetup culture. We have now regular meetups in more cities than ever before. I especially want to mention the new meetups in Stuttgart, Frankfurt and Tirana!
  • The strong focus on open collaboration is also super attractive for partners and customers. This is still not normal for most software companies.

And does this all work? Yes it does. Look at the community activity statistics. We are already the most active project in our space and still growing fast. I’m so happy that Nextcloud is back on track as a real open source project, similarly structured to what I learned during my time in the KDE community!


The next big area that I want to cover is Nextcloud as a product itself.

  • We created a completely new brand including name, logo, CI, and so on. And it is a great brand, fresh and relevant, and people love it!
  • We released Nextcloud 9 well ahead of schedule, the first, though limited, milestone in our history.
  • With Nextcloud 10 came significant improvements across the board. The goal was to release all the enterprise features as open source, but it also included a lot of new features that are not available elsewhere.
  • I’m happy to announce that we will release Nextcloud 11 this month. I don’t want to spoil the surprise here but it will be the biggest and most important release so far with big improvements in speed, reliability and very significant features.
  • More information will be available later this month. Please everyone, help test the beta which is already released. It’s 100% open source of course!

I’m happy that Nextcloud is fully AGPL again, the license that I picked at the beginning when I founded ownCloud. This gives everyone legal clarity and guarantees real benefits and freedom to all users and contributors. We don’t mix potentially incompatible licenses, which might become a legal minefield. We are committed to protecting and defending this license, also for our contributors if needed.


Six months ago we also founded a new company called Nextcloud GmbH. The idea was to learn from the past and make this a truly sustainable company. It is built to provide a long-term home for core developers and a guarantee that the product will be developed and maintained for a long time. Everything at Nextcloud GmbH is built to be a sustainable business. Nextcloud GmbH doesn’t exist to be sold and it isn’t designed and optimized for an exit. We are completely self-funded and we don’t depend on any external investors. This gives us a degree of freedom to do the right thing for the project, the company and the people that we never had in the past.

Transparency is key. We do everything in the open. The only exceptions are customer data and legal topics. We even develop our main website on GitHub and we get very significant external contributions from the community. If you’ve seen how our website has evolved and then see who is doing the work, you’ll be looking at another example of the strength of community. I don’t know a lot of companies that have that level of openness!

To really benefit from what the open source model has to offer, we decided not to be open core just for marketing purposes, but to follow the 100% open source model that successful companies like Red Hat, SUSE and others are leading with.

Our customers notice and appreciate this, as feedback we got from partners and customers shows. We received significant contributions in both code and other input. This is because customers and partners know their work won’t end up in a proprietary product which then makes them pay for their own work later on. Customers also like that there is no lock-in with Nextcloud, because it is completely open source. They pay for our excellent expert support and services, like with other real open source companies, and we constantly have to prove our value. And we do, seeing how business is going!

Business-wise, confidence in your business model pays off. We are already well over 20 employees, and that is only counting full-time employees, not partners or freelancers.

The amount of customer interest we get is unbelievable; we still have a hard time processing all the incoming requests and sending out quotes and contracts quickly enough. And yes, we’re looking for help! People clearly really like what we are doing.

The big news that I want to announce today is that as of last week we have already reached profitability. This is crazy after only six months, long before we planned. You might remember that we secured initial funding for three years, which means we will be able to continue to pursue an aggressive growth strategy, investing more in Nextcloud and customer satisfaction while maintaining a sustainable business over the coming years.

Investing means hiring and we have a big number of job openings – if you’d like to work for an awesome, innovative, open, young and very healthy company, send your resume!

We’re also looking for partners who want to help bring Nextcloud to an even wider user base – you can use the contact form to talk to us about this.


So what is next? I’m really happy that everyone in the Nextcloud community shares the same vision and idea. We want to enable our users to secure their data, protect their privacy and fix their data handling and communication problems. And that is exactly what we will keep working on, double speed!

Join the Nextcloud community for the next 6 months and the next 10 years: make a difference!


December 05, 2016

We are approaching Christmas, the season of gifts and supposed happiness, so the KDE development team has not wanted to be left behind and has scheduled the big update of its applications for December. But before this big update there is always testing to be done. That is why I am pleased to announce that the release candidate of KDE Applications 16.12 is out. It never stops! KDE rocks!

The KDE Applications 16.12 release candidate is out


This past December 2, the KDE Community's development team announced the release candidate of KDE Applications 16.12, another step in the evolution of its ecosystem of programs, which has two fundamental goals: to keep improving the KDE applications and to continue migrating more applications to the Qt5/KF5 framework.

Following work that began the very day KDE Applications 16.04 was released, the developers have been working quietly but in a coordinated and steady fashion, preparing the new features that await us in December.

Now is the time to freeze features and dependencies so that the development team (and anyone else who wants to join in) can focus on fixing bugs and polishing the applications.

More information:

Try it and report bugs

All tasks within the Free Software world are important: developing, translating, packaging, designing, promoting, and so on. But there is one that tends to be overlooked and that we only remember when things don't work as they should: finding bugs.

From this blog I encourage you to be one of the people responsible for the success of the new KDE applications release. To do so, take part in the task of finding and reporting bugs, something essential for the developers to fix them so that the launch of KDE Applications 16.12 is well polished. Keep in mind that, on many occasions, bugs exist because the developers have never run into them, since the circumstances for them to appear never arose.

To do this, install the beta and report the bugs you come across, as I explained back in this blog post.

In the previous post on Snapping KDE Applications we looked at the high-level implications and use of the KDE Frameworks 5 content snap to snapcraft snap bundles for binary distribution. Today I want to get a bit more technical and look at the actual building and inner workings of the content snap itself.

The KDE Frameworks 5 snap is a content snap. Content snaps are really just ordinary snaps that define a content interface. Namely, they expose part or all of their file tree for use by another snap but otherwise can be regular snaps and have their own applications etc.

KDE Frameworks 5’s snap is special in terms of size and scope: the whole set of KDE Frameworks 5, combined with Qt 5 and a large chunk of the graphics stack that is not part of the ubuntu-core snap. All in all, just for the Qt 5 and KF5 parts, we are talking about close to 100 distinct source tarballs that need building to compose the full frameworks stack. KDE is in the fortunate position of already having builds of all these available through KDE neon. This allows us to simply repack existing work into the content snap. This is for the most part just as good as doing everything from scratch, but has the advantage of saving both maintenance effort and build resources.

I do love automation, so the content snap is built by some rather stringy proof of concept code that automatically translates the needed sources into a working snapcraft.yaml that repacks the relevant KDE neon debs into the content snap.
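The actual generator lives in KDE neon's tooling, but the core idea can be sketched in a few lines: turn a list of already-built runtime debs into a snapcraft part that stages them. Everything in this sketch (function name, package list, part layout) is illustrative, not the real proof-of-concept code:

```python
# Illustrative sketch: generate a snapcraft.yaml fragment that repacks a
# list of existing deb packages into a content snap via stage-packages.
# Names and structure are invented; this is not KDE neon's actual tooling.

def make_snapcraft_yaml(name, version, runtime_debs):
    lines = [
        f"name: {name}",
        f"version: {version}",
        "confinement: strict",
        "parts:",
        "  kf5:",
        "    plugin: nil",  # nothing to compile, we only stage packages
        "    stage-packages:",
    ]
    lines += [f"      - {deb}" for deb in sorted(runtime_debs)]
    return "\n".join(lines) + "\n"

yaml_text = make_snapcraft_yaml(
    "kde-frameworks-5", "5.28", ["libqt5core5a", "libkf5coreaddons5"]
)
print(yaml_text)
```

The real script additionally resolves the full dependency list from the archive, which is exactly the part that makes automation worthwhile here.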

Looking at this snapcraft.yaml we’ll find some fancy stuff.

After the regular snap attributes the actual content interface is defined. It’s fairly straightforward and simply exposes the entire snap tree as kde-frameworks-5-all content. This is then used on the application snap side to find a suitable content snap so it can access the exposed content (i.e. in our case the entire file tree).

        slots:
          kde-frameworks-5-slot: # slot name illustrative
            content: kde-frameworks-5-all
            interface: content
            read:
              - "."

The parts of the snap itself are where the most interesting things happen. To make things easier to read and follow I’ll only show the relevant excerpts.

The content snap consists of the following parts: kf5, kf5-dev, breeze, plasma-integration.

The kf5 part is the meat of the snap. It tells snapcraft to stage the binary runtime packages of KDE Frameworks 5 and Qt 5. This effectively makes snapcraft pack the named debs along with necessary dependencies into our snap.

        kf5:
          plugin: nil
          stage-packages:
            - libkf5coreaddons5
            # further runtime packages trimmed in this excerpt

The kf5-dev part looks almost like the kf5 part but has entirely different functionality. Instead of staging the runtime packages it stages the buildtime packages (i.e. the -dev packages). It additionally has a tricky snap rule which excludes everything from actually ending up in the snap. This is a very cool trick: it effectively means that the buildtime packages will be in the stage and we can build other parts against them, but none of them will end up in the final snap. After all, they would be entirely useless there.

        kf5-dev:
          after:
            - kf5
          plugin: nil
          stage-packages:
            - libkf5coreaddons-dev
            # further -dev packages trimmed in this excerpt
          snap:
            - "-*"

Besides those two we also build two runtime integration parts entirely from scratch breeze and plasma-integration. They aren’t actually needed, but ensure sane functionality in terms of icon theme selection etc. These are ordinary build parts that simply rely on the kf5 and kf5-dev parts to provide the necessary dependencies.

An important question to ask here is how one is meant to build against this now. There is this kf5-dev part, but it does not end up in the final snap where it would be entirely useless anyway as snaps are not used at buildtime. The answer lies in one of the rigging scripts around this. In the snapcraft.yaml we configured the kf5-dev part to stage packages but then excluded everything from being snapped. However, knowing how snapcraft actually goes about its business we can “abuse” its inner workings to make use of the part after all. Before the actual snap is created snapcraft “primes” the snap, this effectively means that all installed trees (i.e. the stages) are combined into one tree (i.e. the primed tree), the exclusion rule of the kf5-dev part is then applied on this tree. Or in other words: the primed tree is the snap before exclusion was applied. Meaning the primed tree is everything from all parts, including the development headers and CMake configs. We pack this tree in a development tarball which we then use on the application side to stage a development environment for the KDE Frameworks 5 snap.
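Snapcraft's real priming is of course more involved, but the stage-everything/snap-nothing trick can be illustrated with a toy model (part names, file paths and the simplified rule handling here are all made up for illustration):

```python
# Toy model of snapcraft's stage -> prime -> snap pipeline, showing why a
# part with snap: ["-*"] is usable at build time but absent from the final
# snap. Part names and files are illustrative.
from fnmatch import fnmatch

def prime(parts):
    """The primed tree combines every part's staged files."""
    return {f for files in parts.values() for f in files}

def snap(parts, snap_rules):
    """Per part, keep files unless an exclusion pattern ('-...') matches."""
    final = set()
    for name, files in parts.items():
        excludes = [p[1:] for p in snap_rules.get(name, []) if p.startswith("-")]
        final |= {f for f in files if not any(fnmatch(f, e) for e in excludes)}
    return final

parts = {
    "kf5": ["usr/lib/libKF5CoreAddons.so.5"],
    "kf5-dev": ["usr/include/kcoreaddons_export.h",
                "usr/lib/cmake/KF5CoreAddons/KF5CoreAddonsConfig.cmake"],
}
rules = {"kf5-dev": ["-*"]}  # stage everything, snap nothing

primed = prime(parts)        # dev headers ARE here -> tarball this for building
final = snap(parts, rules)   # ...but they are gone from the final snap
```

The development tarball described above corresponds to `primed`, while the shipped content snap corresponds to `final`.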

Specifically on the application-side we use a boilerplate part that employs the same trick of stage-everything but snap-nothing to provide the build dependencies while not having anything end up in the final snap.

    kde-frameworks-5-dev: # part name illustrative; source is the dev tarball
      plugin: dump
      snap: [-*]

Using the KDE Frameworks 5 content snap, KDE can create application snaps that are a fraction of the size they would be if they contained all dependencies themselves. While this does give up some optimization potential by aggregating requirements in a more central fashion, it quickly starts paying off given we are saving upwards of 70 MiB per snap.

Application snaps can of course still add more stuff on top or even override things if needed.

Finally, as we approach the end of the year, we begin the season of giving. What would suit the holidays better than giving to the entire world by supporting KDE with a small donation?

Today I am pleased to write about a new collaborative project tied to Free Software, though one not born within the KDE Community. From the hands of nine people committed to Free Software comes Colaboratorio, the new collaborative GNU/Linux blog that is here to stay in the blogosphere. It is time to welcome it with a big round of applause and add it to our bookmarks.

Colaboratorio is born, the new collaborative GNU/Linux blog

Some time ago I read, in a couple of articles on the magnificent blog La Sombra del Helicóptero, that its creator Enrique Bravo wanted to start a project to create a collaborative GNU/Linux blog. It seemed like a great idea to me and I quietly wished him all the luck in the world, although I had my doubts, since something that looks so simple really is not.

It is not just a matter of convincing various authors to write in a single place; you also need something that is often in short supply: shared time to coordinate.

That is why I am pleased to learn that his initiative has succeeded and that this past December 1 the birth of the Colaboratorio project was announced, setting out with 9 contributors and a small avalanche of varied articles.

After reading Enrique Bravo's post on his blog, I went to have a look at the new site, heading for the Editorial, where the spirit of the project is laid out:

“Colaboratorio is a collective project. Nine people are taking part in it so far, some of whom you probably already know. We say ‘so far’ because we have no objection to that number continuing to grow.

The name seems tailor-made, although it came to us by chance; let's say it was found without looking for it. We wanted the project to involve collaboration between diverse people and experimentation, doing things differently from what has been seen so far, which gave it a bit of the quality of a laboratory. So the meaning of ‘colaboratorio’ fits us like a glove.

We want to start with a basic deployment and grow as we get a handle on what we offer. We have some surprises in hibernation, but we prefer to bring them out once our main block, the blog, is thoroughly polished and running like clockwork.”

As could not be otherwise, Colaboratorio has already published a good number of articles, most of them introductions worth reading to get to know the people behind the project. In short: a new GNU/Linux blog that stands out as a collaborative project driven by a veteran of the scene, and one that should already be in your bookmarks or RSS reader.

Long live Colaboratorio!

December 04, 2016

Although Plasma 5 is well equipped when it comes to application launchers, it never hurts to have alternatives to personalize our work environment. So let me introduce Tiled Menu, a Windows-style application launcher for Plasma that may suit the tastes of certain users.

Tiled Menu, a Windows-style application launcher for Plasma

Plasma 5's customization options for application launchers are varied: the traditional launcher, a compact version, and the full-screen application launcher.

Nevertheless, ZREN thought it was still missing one inspired by Windows. So he created Tiled Menu, an application launcher that shows all the applications installed on our computer in one column, the most used first and the rest in alphabetical order. It also shows another column with our favorite applications using considerably larger icons.


Personally, I have tried the Tiled Menu launcher on my KDE neon and it is not quite to my taste, but I think it is great to have alternatives and, besides, the project is very young; with the right feedback it could become the fourth launcher option shipping in future Plasma releases.

And if you like it, as I always say: reward the creators with “likes” or “+1”s, and don't forget to share.

More information: KDE Store


What are plasmoids?

For those new to the blog, the word plasmoid may sound a bit odd, but it is simply the name given to widgets for KDE's Plasma desktop.

In other words, plasmoids are just small applications which, placed on the desktop or on one of its panels, extend its functionality or simply decorate it.

December 03, 2016

In keeping with the tradition of LTS aftermaths, the upcoming Plasma 5.9 release – the next feature release after our first Long Term Support Edition – will be packed with lots of goodies to help you get even more productive with Plasma!

Taking a screenshot with an interactive preview

Richer Notifications

Our notification system has stayed virtually the same for the past decade and it shows. Notifications are basically just a bit of text, an icon, and some buttons. They don’t have any semantics, no description of what they’re actually about.

I started a wiki page during Akademy collecting ideas on how to improve notifications in Plasma. The first feature that I implemented is the ability for applications to annotate a notification with a URL (or multiple URLs). The notification service will then show a large preview of said file (or a thumbnail strip in case of multiple files) which can then even be dragged to another window, e.g. to a webbrowser window, an email composer, a chat window, the desktop, anywhere you need it.

This is again in line with our goal for Plasma, allowing you to fully immerse yourself in your current task without ever having to leave the application you’re working with. “Hey, can you send me a screenshot of that thing?” – Meta+Shift+PrtScr, select region, hit return, drag screenshot from notification to chat window, done.

Task Manager Keyboard Shortcuts

Easily number three on the list of most wanted features in Plasma (after Global Menu, scheduled for 5.9, and single Meta key press for opening the launcher, available since 5.8) is the ability to switch between windows and activate launchers using Meta + number keyboard shortcuts.

One of the reasons this hasn’t been implemented in Plasma so far is that we’re infinitely customizable™ and you could have 23 task managers on 3 screens spread across 12 panels. The question is: which panel should own the shortcuts? Should they be spread, and if so, in what order? It’s complicated.

Initially, I tried to take all of this into account and created a 500+ line patch that let you designate which panel owned the shortcuts, warned you that “Global shortcuts only work with one Task Manager applet at a time.”, and so on. This just wasn’t maintainable. The new approach is less than 100 lines, very simple, and basically asks the first task manager it finds on a panel on the primary screen (if there is none, it will look on all other panels) to activate the task at the given index.

While this doesn’t give you full flexibility, it implements the majority usecase of having one panel with a task manager and all of that with very little code. It’s always a trade-off between code maintainability and implementing frequently requested features.
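The selection logic described above can be sketched roughly like this; the data model is invented for illustration (the real code is C++/QML inside the Task Manager applet), with the primary screen modeled as screen 0:

```python
# Rough sketch of "which task manager owns Meta+number": prefer the first
# task manager applet on a panel on the primary screen, otherwise the first
# one found on any other panel. Data model invented for illustration.

def find_shortcut_owner(panels):
    """panels: list of dicts like {"screen": 0, "applets": [...]}."""
    def first_task_manager(panel_list):
        for panel in panel_list:
            if "taskmanager" in panel["applets"]:
                return panel
        return None

    primary = [p for p in panels if p["screen"] == 0]
    others = [p for p in panels if p["screen"] != 0]
    return first_task_manager(primary) or first_task_manager(others)

panels = [
    {"screen": 1, "applets": ["taskmanager"]},
    {"screen": 0, "applets": ["launcher", "taskmanager"]},
]
owner = find_shortcut_owner(panels)  # the primary-screen panel wins
```

The whole trade-off is visible here: a handful of lines covering the common one-task-manager case, instead of configuration UI for every conceivable layout.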

December 02, 2016

This is largely based on a presentation I gave a couple of weeks ago. If you are too lazy to read, go watch it instead😉

For 20 years KDE has been building free software for the world. As part of this endeavor, we created a collection of libraries to assist in high-quality C++ software development as well as building highly integrated graphic applications on any operating system. We call them the KDE Frameworks.

With the recent advance of software bundling systems such as Snapcraft and Flatpak, KDE software maintainers are however a bit on the spot. As our software builds on such a vast collection of frameworks and supporting technology, the individual size of a distributable application can be quite abysmal.

When we tried to package our calculator KCalc as a snap bundle, we found that even a relatively simple application like this makes for a good 70 MiB snap to be in a working state (most of this is the graphical stack required by our underlying C++ framework, Qt).
Since then a lot of effort has been put into devising a system that would allow us to deal with this more efficiently. We now have a reasonably suitable solution on the table.

The KDE Frameworks 5 content snap.

A content snap is a special bundle meant to be mounted into other bundles for the purpose of sharing its content. This allows us to share a common core of libraries and other content across all applications, making the individual applications just as big as they need to be. KCalc is only 312 KiB without translations.
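To put those numbers in perspective, here is the simple arithmetic, using the sizes quoted above (70 MiB standalone vs. 312 KiB with the content share):

```python
# Size comparison using the figures quoted in the post.
standalone_kib = 70 * 1024  # ~70 MiB for a self-contained KCalc snap
shared_kib = 312            # ~312 KiB for KCalc against the content share

savings_kib = standalone_kib - shared_kib
factor = standalone_kib / shared_kib
print(f"saves ~{savings_kib} KiB per app, ~{factor:.0f}x smaller")
```

In other words, roughly a 230-fold reduction for this particular application, before even counting the one-time cost of the shared content snap itself.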

The best thing is that beside some boilerplate definitions, the snapcraft.yaml file defining how to snap the application is like a regular snapcraft file.

Let’s look at how this works by example of KAlgebra, a calculator and mathematical function plotter:

Any snapcraft.yaml has some global attributes we’ll want to set for the snap

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

We’ll want to define an application as well. This essentially allows snapd to expose and invoke our application properly. For the purpose of content sharing we will use a special start wrapper called kf5-launch that allows us to use the content shared Qt and KDE Frameworks. Except for the actual application/binary name this is fairly boilerplate stuff you can use for pretty much all KDE applications.

    apps:
      kalgebra:
        command: kf5-launch kalgebra
        plugs:
          - kde-frameworks-5-plug # content share itself
          - home # give us a dir in the user home
          - x11 # we run with xcb Qt platform for now
          - opengl # Qt/QML uses opengl
          - network # gethotnewstuff needs network IO
          - network-bind # gethotnewstuff needs network IO
          - unity7 # notifications
          - pulseaudio # sound notifications
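The kf5-launch wrapper used in the command above is essentially an environment shim: it points the dynamic linker, Qt and the XDG machinery at the mounted content share before exec'ing the real binary. A sketch of what it has to compute, with illustrative paths and variable names (the actual wrapper is a small shell script shipped with the snap):

```python
# Sketch of the environment a kf5-launch-style wrapper sets up before
# exec'ing the application. Paths are illustrative, not the real script.
import os

def kf5_environment(snap_root, base=None):
    """Return the extra environment a content-shared KF5 app needs.
    snap_root is the application snap's root ($SNAP); the content share
    is mounted at <snap_root>/kf5 (the plug's target directory)."""
    env = dict(base or {})
    kf5 = os.path.join(snap_root, "kf5")
    lib = os.path.join(kf5, "usr/lib/x86_64-linux-gnu")
    # Shared Qt/KF5 libraries must be visible to the dynamic linker.
    env["LD_LIBRARY_PATH"] = lib + ":" + env.get("LD_LIBRARY_PATH", "")
    # Qt plugins and QML modules live inside the content share.
    env["QT_PLUGIN_PATH"] = os.path.join(lib, "qt5/plugins")
    env["QML2_IMPORT_PATH"] = os.path.join(lib, "qt5/qml")
    # XDG lookups (icons, shared data) should search the share too.
    env["XDG_DATA_DIRS"] = os.path.join(kf5, "usr/share") + ":" + env.get(
        "XDG_DATA_DIRS", "/usr/share")
    return env

env = kf5_environment("/snap/kalgebra/current")
# A real wrapper would finish with something like:
#   os.execvpe(binary, argv, env)
```

This is why the wrapper is the same boilerplate for every KDE application: only the binary name at the end differs.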

To access the KDE Frameworks 5 content share we’ll then want to define a plug our application can use to access the content. This is always the same for all applications.

    plugs:
      kde-frameworks-5-plug:
        interface: content
        content: kde-frameworks-5-all
        default-provider: kde-frameworks-5
        target: kf5

Once we got all that out of the way we can move on to actually defining the parts that make up our snap. For the most part, parts are build instructions for the application and its dependencies. With content shares there are two boilerplate parts you want to define.

The development tarball is essentially a fully built KDE Frameworks tree including development headers and CMake configs. The tarball is packed by the same tech that builds the actual content share, so this allows you to build against the correct versions of the latest share.

    kde-frameworks-5-dev: # source is the dev tarball (URL omitted here)
      plugin: dump
      snap: [-*]

The environment rigging provides the kf5-launch script we previously saw in the application’s definition; we’ll use it to execute the application within a suitable environment. It also gives us the directory for the content share mount point.

    kde-frameworks-5-env: # part name illustrative
      plugin: dump
      snap: [kf5-launch, kf5]

Lastly, we’ll need the actual application part, which simply instructs that it will need the dev part to be staged first and then builds the tarball with boilerplate cmake config flags.

    kalgebra:
      after: [kde-frameworks-5-dev]
      plugin: cmake
      configflags:
        - "-DCMAKE_BUILD_TYPE=Release"

Putting it all together we get a fairly standard snapcraft.yaml with some additional boilerplate definitions to wire it up with the content share. Please note that the content share is using KDE neon’s Qt and KDE Frameworks builds, so if you want to try this and need additional build-packages or stage-packages to build a part, you’ll want to make sure that KDE neon’s User Edition archive is present in the build environment’s sources.list (a deb line for the neon User Edition repository, suite xenial, component main). This is going to get a more accessible centralized solution for all of KDE soon™.

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

parts:
  kde-frameworks-5-dev: # source is the dev tarball (URL omitted here)
    plugin: dump
    snap: [-*]
  kde-frameworks-5-env: # part name illustrative
    plugin: dump
    snap: [kf5-launch, kf5]
  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    configflags:
      - "-DCMAKE_BUILD_TYPE=Release"

Now to install this we’ll need the content snap itself. Here is the content snap. To install it, a command like sudo snap install --force-dangerous kde-frameworks-5_*_amd64.snap should get you going. Once that is done, one can install the kalgebra snap. If you are a KDE developer and want to publish your snap on the store, get in touch with me so we can get you set up.

The kde-frameworks-5 content snap is also available in the edge channel of the Ubuntu store. You can try the games kblocks and ktuberling like so:

sudo snap install --edge kde-frameworks-5
sudo snap install --edge --devmode kblocks
sudo snap install --edge --devmode ktuberling

If you want to be part of making the world a better place, or would like a KDE-themed postcard, please consider donating a penny or two to KDE


December 01, 2016



WikiToLearn 1.0 action plan is getting real


Release the new version and start working to improve it: done.

Ok, done! Now let’s start talking about it, spam it, find new users and grow more and more!

Yes, more or less this is the work we are doing in these weeks with our team.

Unimib is funding posters, which we are using to start a new promotional campaign for WikiToLearn! The promo team is working on these new info-graphics and you are going to love them. Unimib students, stay tuned and get ready to spot our posters all around you.

We are also working hard through both institutional and more informal contacts: new collaborators are coming. The team is organizing and taking part in new events in the near future; stay tuned, more people are going to talk about us and you’ll appreciate our efforts! We are also planning a series of new talks to present the new release and to get more and more people involved in our project.

We are also working on agreements with different universities and institutional centers such as GARR, Imperial College and UCL.

Christmas is coming; if you have ideas to celebrate it with our community, contact us! WikiToLearn 1.0 is going to celebrate its first Xmas 😉

C’mon, the new year with the new WikiToLearn is coming: the moment is now!

Share your knowledge, share freedom!



The article Wiki, what’s going on? (Part 18 – Making it real) first appeared on Blogs from WikiToLearn.

KDevelop 5.0.3 released

Today, we are happy to announce the release of KDevelop 5.0.3, the third bugfix and stabilization release for KDevelop 5.0. An upgrade to 5.0.3 is strongly recommended to all users of 5.0.0, 5.0.1 or 5.0.2.

Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux, as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.

List of notable fixes and improvements since version 5.0.2:

  • Fix a performance issue which would lead to the UI becoming unresponsive when lots of parse jobs were created (BUG: 369374)
  • Fix some behaviour quirks in the documentation view
  • Fix a possible crash on exit (BUG: 369374)
  • Fix tab order in problems view
  • Make the "Forward declare" problem solution assistant only pop up when it makes sense
  • Fix GitHub handling authentication (BUG: 372144)
  • Fix Qt help jumping to the wrong function sometimes
  • Windows: Fix MSVC startup script not working in some environments
  • kdev-python: fix some small issues in the standard library info

The 5.0.3 source code and signatures can be downloaded from here.

sbrauch Thu, 12/01/2016 - 22:00


November 30, 2016


just a short heads-up that KDevelop is seeking a new maintainer for the Ruby language support. Miquel Sabaté did an amazing job maintaining the plugin in recent years, but would like to step down as maintainer because he lacks the time to continue looking after it.

Here's an excerpt from a mail Miquel kindly provided, to make it easier for newcomers to follow-up on his work in kdev-ruby:

As you might know, the development of kdev-ruby has stalled and the KDevelop team is looking for developers that want to work with it. The plugin is still considered experimental and that's because there is still plenty of work to be done. What has been done so far:

  • The parser is based on the one that can be found on MRI. That being said, it's based on an old version of it so you might want to update it.
  • The DUChain code is mostly done but it's not stable yet, so there's quite some work to be done on this front too.
  • Code completion mostly works but it's quite basic.
  • Ruby on Rails navigation is done and works.

There is a lot of work to be done and I'm honestly skeptical whether this approach will end up working anyway. Because of this skepticism and the fact that I was using another editor, I ended up abandoning the project, and thus kdev-ruby was no longer maintained by anyone.

If you feel that you can take on the challenge and you want to contribute to kdev-ruby, please reach out to the KDevelop team. They are extremely friendly and will guide you through the process of developing this plugin.

Again, thanks for all your work Miquel, you will be missed!

If you're interested in that kind of KDevelop plugin development, please get in touch with us!

More information about kdev-ruby here:

KDE has lately been growing quite a bit in repositories, and it's not always easy to tell what needs to be built first: do I build kdepim-apps-libs or pimcommon first?

A few days ago I was puzzled by the same question and realized we have the answer in the dependency-data-* files from the kde-build-metadata repository.

They define what depends on what, so all we need to do is build a graph from those dependencies and get a valid build order out of it.

Thankfully Python already has modules for graphs and such, so it was not that hard to write.

So say you want to know a valid build order for the stable repositories based on kf5-qt5

Here it is

Note I've been saying *a* valid build order, not *the* valid build order: there are various orders that are valid, since not every repo depends on every other repo.
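To illustrate the idea, here is a minimal sketch of the topological sort at the heart of such a script. It assumes the dependency-data-* files have already been parsed into a simple {repo: [repos it depends on]} mapping (the repo names below are just examples, and the stdlib graphlib module used here postdates the original script, which would have done the same thing with another graph library):

```python
# Hypothetical sketch: compute *a* valid build order from a
# {repo: [repos it depends on]} mapping. Parsing the real
# dependency-data-* files into this shape is left out here.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

deps = {
    "kdepim-apps-libs": ["pimcommon"],
    "pimcommon": ["kcoreaddons"],
    "kcoreaddons": [],
}

# static_order() yields each repo only after everything it
# depends on, i.e. one of the possibly many valid build orders.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)
```

A nice side effect: graphlib raises CycleError if the dependency data ever contains a cycle, in which case no valid build order exists at all.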

Now I wonder: does anyone else find this useful? And if so, to which repository do you think I should commit such a script?

Akademy, KDE's annual conference, requires a place and team for the year 2017. That's why we are looking for a vibrant, enthusiastic spot in Europe that can host us!

A bit about Akademy

Akademy is KDE's annual get-together where our creativity, productivity and love are at their peak. Developers, users, translators, students, artists, writers - pretty much anyone who has been involved with KDE will join Akademy to participate and learn. Contents will range from keynote speeches and a two-day dual track session of talks by the FOSS community, to workshops and Birds of a Feather (BoF) sessions where we plot the future of the project.

Friday is scheduled for the KDE e.V. General Assembly and a pre-Akademy party/welcoming event. Saturday and Sunday cover the keynotes, talks and lightning talks. The remaining four days are used for BoFs, intensive coding sessions and workshops for smaller groups of 10 to 30 people; one of these days is reserved for a day trip to the local tourist sights.

Hosting Akademy is a great way to contribute to a movement of global collaboration. You get a chance to host one of the largest FOSS communities in the world, with contributors from all over the world, and be a witness to a wonderful inter-cultural fusion of attendees in your home town. You'll also get great exposure to Free Software. It is a great opportunity for local university students, professors, technology enthusiasts and professionals to try their hand at something new.

What You Need to Do

Akademy requires a location in Europe, with a nice conference venue, that is easy to reach, preferably close to an international airport.

Organizing Akademy is a demanding and resource-intensive task, but you'll be guided along the entire process by people who have been doing this for years. Nevertheless, the local team should be prepared to spare a considerable amount of time for this.

For detailed information, please see the Call for Hosts. Questions and applications should be addressed to the Board of KDE e.V. or the Akademy Team. Please indicate your interest in hosting Akademy to the Board of KDE e.V. by December 15th. Full applications will be accepted until 15th January. We look forward to your enthusiasm in being the next host for Akademy 2017!

We are happy to announce the release of Qt Creator 4.2 RC1.

Since the release of the Beta, we’ve been busy with polishing things and fixing bugs. Just to name a few:

  • We fixed that the run button could spuriously stay disabled after parsing QMake projects.
  • Qt Creator is no longer blocked while the iOS Simulator is starting up.
  • We added preliminary support for MSVC2017 (based on its RC).

For an overview of the new features in 4.2 please head over to the Beta release blog post. See our change log for a more detailed view on what has changed.

Get Qt Creator 4.2 RC1

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC in #qt-creator, and on the Qt Creator mailing list.

The post Qt Creator 4.2 RC1 released appeared first on Qt Blog.

A week later than planned due to illness, we are happy to release the first release candidate for Krita 3.1 today. There are a number of important bug fixes, and we intend to fix a number of other bugs still in time for the final release.

  • Fix a crash when saving a document that has a vector layer to anything but the native format (regression in beta 3)
  • Fix exporting images using the commandline on Linux
  • Update the OSX QuickLook plugin to use the right thumbnail sizes
  • Improved zoom menu icons
  • Unify colors on all svg icons
  • Fix tilt-elevation brushes to work properly on a rotated or mirrored canvas
  • Improve drawing with the stabilizer enabled
  • Fix isotropic spacing when painting on a mirrored canvas
  • Fix a race condition when saving
  • Fix multi-window usage: the tool options palette would only be available in the last opened window; now it’s available everywhere.
  • Fix a number of memory leaks
  • Fix selecting the saving location for rendering animations (there are still several bugs in that plugin, though — we’re on it!)
  • Improve rendering speed of the popup color selector

You can find out more about what is going to be new in Krita 3.1 in the release notes. The release notes aren’t finished yet, but take a sneak peek all the same!


Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


A snap image for the Ubuntu App Store is available in the beta channel.


Source code

November 29, 2016

Many KDAB engineers are part of the Qt Security Team. The purpose of this team is to get notified of security-related issues, and then decide the best course of action for the Qt project.

Most of the time, this implies identifying the problem, creating and submitting a patch through the usual Qt contribution process, waiting for it to be merged in all the relevant branches, and then releasing a notice to the users about the extent of the security issue. We also work together with downstreams, such as our customers, Linux distributions and so on, in order to minimize the risks for Qt users of exposing the security vulnerability.

However, that’s only part of the story. As part of the security team, we can’t simply wait for reports to fall in our laps; we also need to have a proactive approach and constantly review our code base and poke it in order to find problems. For that, we use a variety of tools: the excellent Coverity Scan service; the sanitizers available in GCC and Clang; clazy, maintained by KDAB’s engineer Sérgio Martins; and so on.

Note that all these tools help catch any sorts of bugs, not only the security-related ones. For instance, take a look at the issues found and fixed by looking at the Undefined Behavior Sanitizer’s reports, and the issues fixed by looking at Coverity Scan’s reports.

Today I want to tell you a little more about one of the tools used to test Qt’s code: the American Fuzzy Lop, or AFL to friends.


What is AFL? It’s a fuzzer: a program that keeps changing the input to a test in order to make it crash (or, in general, misbehave). This “mutation” of the input goes on forever — AFL never ends, just keeps finding more stuff, and optimizes its own searching process.

AFL gained a lot of popularity because:

  • it is very fast (it instruments your binaries);
  • it uses state-of-the-art algorithms to mutate the input in ways that maximize the effect on the target program;
  • the setup is immediate;
  • it has a very nice text-based UI.

The results speak for themselves: AFL has found security issues in all major libraries out there. Therefore, I decided to give it a try on Qt.

The setup

Setting up AFL is straightforward: just download it from its website and run make. That’s it — this will produce a series of executables that will act as a proxy for your compiler, instrumenting the generated binaries with information that AFL will need. So, after this step, we will end up with afl-gcc, afl-g++ and so on.

You can go ahead and build an instrumented Qt. If you’ve never built Qt from source, here’s the relevant documentation. On Unix systems it’s really a matter of running configure with some options, followed by make and optionally make install. The problem at this step is making Qt use AFL’s compilers, not the system ones. This turns out to be very simple, however: just export a few environment variables, pointing them to AFL’s binaries:

export CC=/path/to/afl-gcc
export CXX=/path/to/afl-g++
./configure ...

And that’s it, this will build an instrumented Qt. (A more thorough solution would involve creating a custom mkspec for qmake; this would have the advantage of making the final testcase application also use AFL automatically. For this task, however, I felt it was not worth it.)

Creating a testcase

What you need here is to create a very simple application that takes an input file from the command line (or stdin) and uses it to stress the code paths you want to test.

Now, when looking at a big library like Qt, there are many places where Qt reads untrusted input from the user and tries to parse it: image loading, QML parsing, (binary) JSON parsing, and so on. I decided to give a shot at binary JSON parsing, feeding it with AFL’s mutated input. The testcase I built was straightforward:

#include <QtCore>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // The file to parse is passed as the first command-line argument
    QFile file(app.arguments().at(1));
    if (!file.open(QIODevice::ReadOnly))
        return 1;

    // Feed the (AFL-mutated) contents to the binary JSON parser
    QJsonDocument jd = QJsonDocument::fromBinaryData(file.readAll());

    return 0;
}
Together with the testcase, you will also need a few test files to bootstrap AFL’s finding process. These files should be extremely small (ideally, 1-2KB at maximum) to let the fuzzer do its magic. For this, just dump a few interesting files somewhere next to your testcase. I’ve taken random JSON documents, converted them to binary JSON and put the results in a directory.

Running the fuzzer

Once the testcase is ready, you can run it under the fuzzer like this:

afl-fuzz -m memorylimit \
         -t timeoutlimit \
         [master/slave options] \
         -i testcases/ \
         -o findings/ \
         -- ./test @@

A few explanatory remarks:

  • The testcases directory contains your reference input files, while the findings of the fuzzers will be written in findings.
  • To avoid blowing up your system, AFL sets very strict limits for execution of your test: it is allowed to allocate at most memorylimit megabytes of virtual memory and it is allowed to run for at most timeoutlimit milliseconds. You will typically want to raise the memory limit from its default (50MB) to something bigger, depending on your system and on the test.
  • One instance of afl-fuzz is single threaded; in order to maximize the search throughput on a machine with multiple cores/CPUs, you must manually launch it multiple times with the same -i and -o arguments. You should also give each instance a unique name and, if you want, elect one instance to do a deterministic search rather than a random one. This is all expressed through the master/slave options: pass to one instance the -M fuzzername option, and to all the others pass the -S fuzzername option. (All the fuzzernames must be unique).
  • Last but not least, @@ gets replaced by the name of a file generated by AFL, containing the mutated input.

For reference, I’ve launched my master like this:

afl-fuzz -m 512 -t 20 -i testcases -o findings-json -M fuzzer00 -- ./afl-qjson @@

The output is a nice colored summary of what’s going on, updated in real time:

AFL running over a testcase.


Now: go do something else. This is supposed to run for days! So remember to launch it in a screen session, and maybe launch it via nice so that it runs with a lower priority.


After running for a while, the first findings started to appear: inputs that crashed the test program or made it run for too long. Once AFL sees such inputs, it will save them for later inspection; you will find them under the findings/fuzzername subdirectories:


If you’re lucky (well, I guess it depends how you look at it…), you will end up with inputs that indeed crash your testcase. Time to fix something!

You may also get false positives, in the form of crashes caused by the testcase running out of memory. Remember that AFL imposes a strict memory limit on your executable, so if your testcase allocates too much memory and does not know how to recover from OOM, it will crash. If you see many inputs crashing under AFL but not when run normally, your testcase may be behaving properly but just running out of memory, and increasing the memory limit passed to AFL will fix this.

The sig part in the name of each saved input gives you a hint, telling you which Unix signal caused the crash. In the listing above, signal number 11 is a SIGSEGV, which is indeed a problem. Signal 06 is SIGABRT (that is, an abort), which was generated due to running out of memory.

To reproduce this last case, just manually run the test over that input, and check that it doesn’t misbehave; then rerun it, but this time limiting its available memory via ulimit -v memory_available_in_kilobytes. If the testcase works normally but crashes under a stricter ulimit, it’s likely that you’re in an out-of-memory scenario. This may or may not require a fix in your code; it really depends whether it makes sense for your application/library to recover from an OOM.
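That check can be sketched as follows. The 100MB allocation below is a stand-in workload for the actual testcase binary and crash input, which are specific to your fuzzing run; the limits are examples too:

```python
# Hypothetical sketch of the OOM check described above: run a
# memory-hungry workload with and without an address-space limit,
# mimicking what `ulimit -v` does for the real testcase binary.
import resource
import subprocess
import sys

# Stand-in for "./afl-qjson <crashing input>": allocates ~100MB.
workload = [sys.executable, "-c", "b = bytearray(100 * 1024 * 1024)"]

def limit_memory():
    # ~50MB address-space limit in the child, like `ulimit -v 51200`.
    resource.setrlimit(resource.RLIMIT_AS, (50 * 1024 * 1024, 50 * 1024 * 1024))

ok = subprocess.run(workload).returncode
limited = subprocess.run(workload, preexec_fn=limit_memory).returncode

print("without limit:", ok)         # 0 means it ran fine
print("with 50MB limit:", limited)  # non-zero here points at OOM, not a real crash
```

If, as here, the workload only misbehaves under the tighter limit, you are most likely looking at an out-of-memory false positive rather than a genuine crash.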

Fixing upstream

After reporting the findings to the Security Team, it was a matter of a few days before a fix was produced, tested and merged into Qt. You can find the patches here and here.

Tips and tricks

If you want to play with AFL, I would recommend a couple of things:

  • Set your CPU scaling governor to “performance”. This is for a couple of reasons: it makes no sense for the kernel to try to throttle down your CPUs if AFL is running; and it is actually a bad thing because it interferes with AFL measurements. AFL complains about this, so keep it happy and disable “powersave” or “ondemand” or similar governors.
  • Use a ramdisk for the tests. AFL needs to write a new input file every time it runs your application; for the JSON testcase above, AFL was achieving about 1000 executions/second/core. Each of these runs needs a new test file as input; in addition to that, AFL needs to write stuff for its own bookkeeping. This will put your disk under very considerable stress, possibly even wearing it out. Now, any modern filesystem will still flush data to disk only a few times every second (at most), but still, why hit the disk at all? One can simply create a ramdisk, and run AFL in there:
    $ mkdir afl
    # mount -t tmpfs -o size=1024M tmpfs afl/
    $ cd afl/
    $ afl-fuzz -i inputs -o findings ...
  • Do not let this run on a laptop or some other computer which may overheat. AFL is tremendously resource intensive and runs for days. If you want to get liquid cooling for your workstation, this is the perfect excuse.


Fuzzing is an excellent technique for testing code that needs to accept untrusted input. It is straightforward to set up and run, requires no modifications to the tested code, and can find issues in a relatively short timespan. If your application features parsers (especially of binary data), consider keeping AFL running over it for a while, as it may discover some serious problems. Happy fuzzing!

About KDAB

KDAB is a consulting company offering a wide variety of expert services in Qt, C++ and 3D/OpenGL and providing training courses in:

KDAB believes that it is critical for our business to contribute to the Qt framework and C++ thinking, to keep pushing these technologies forward to ensure they remain competitive.

The post Fuzzing Qt for fun and profit appeared first on KDAB.

As I just posted in the Mission Forum, our KDE Developer Guide needs a new home. Currently it is "not found" where it is supposed to be.

UPDATE: Nicolas found the PDF on, which does have the photos too. Not as good as the xml, but better than nothing.

We had great luck using markdown files in git for the chapters of the Frameworks Cookbook, so the Devel Guide should be stored and developed in a like manner. I've been reading about Sphinx lately as a way to write documentation, which is another possibility. Kubuntu uses Sphinx for docs.

In any case, I do not have the time or skills to get, restructure and re-place this handy guide for our GSoC students and other new KDE contributors.

This is perhaps suitable for a Google Code-in task, but I would need a mentor who knows markdown or Sphinx to oversee. Contact me if interested! #kde-books or #kde-soc

November 28, 2016

Kdenlive 16.12 will be released very soon, and we are trying to fix as many issues as possible. This is why we are organizing a Bug squashing day this Friday, the 2nd of December 2016, between 9am and 5pm (Central European Time, CET).

Kdenlive needs you

There are several ways you can help us improve this release, depending on your skills or interests. During the bug squashing day, Kdenlive developers will be reachable on IRC, channel #kdenlive, to answer your questions. A collaborative notepad has also been created to coordinate the efforts.

If you have some interest / knowledge in coding:
You can download Kdenlive’s source code and find instructions on our wiki. We will also be available on friday on IRC to help you setup your development environment. You can then select an ‘easy bug‘ from the notepad list and then look at the code to try to fix it. Feel free to ask your questions on IRC, the developers will guide you through the process, so that you can get familiar with the parts of the code you will be looking at.

If you are a user and encounter a bug:
You can help us by testing the Kdenlive 16.12 RC version. Our easy to install AppImage and snap packages will be updated on the 1st of December with the latest code (Ubuntu users can also use our PPA). This will allow you to install the latest version without messing with your system. You can then check if a bug is still there in the latest version, or let us know if it is fixed.

So feel free to join us this Friday, this is your chance to help the world of free software video editing!

For the Kdenlive team,
Jean-Baptiste Mardelle

Google Code-in

Google Code-in has just begun. I’ll be mentoring this time.🙂

If you know any pre-university students who are interested in computers or open source, please do inform them about this. Tasks range over coding, documentation, training, outreach, research, quality assurance and user interface work. Students also earn prizes for successfully completing tasks.

What is Google Code-in?

Google Code-in is a contest by Google to introduce pre-university students (ages 13-17) to open source software development. Since 2010, over 3200 students from 99 countries have completed work in the contest.

What I’ll be doing?

I’ll be mentoring tasks under WikiToLearn, part of the KDE organization.
I have published a task related to the WikiToLearn community: What can I do for WikiToLearn

I’ll be helping students with code and design for this task.

I have few other tasks in my mind. I may publish them as we move on (based on our progress).

Why I’m doing this?

Well, I just love open source and like helping others get into FOSS. And WikiToLearn, KDE is a great community to work with.
I strongly believe in its philosophy: “Knowledge only grows if shared”. It feels good to help the younger generation get into the community so that our community grows big.

Join WikiToLearn now and contribute however you can.🙂

Design, Technical Excellence and Superb User Experience

Why does a tipper truck need an app? Meiller is the leading manufacturer of tippers in Europe. KDAB software developers and UI/UX designers worked with Meiller to create a mobile app that interacts with embedded hardware on the truck, allowing drivers to diagnose and fix problems – even when on the road. KDAB shows us how technical excellence and stunning user experience go hand in hand.

The post KDAB and Meiller – Tipper Truck App appeared first on KDAB.


‘Tis the season to be jolly and as always we are just trying to be Qt /kjuːt/. We just keep on giving and giving and here is another present for you. We are hosting webinars based on the breakout sessions from The Qt World Summit 2016. So, grab a cup of cocoa and sign up for our December Tuesday webinars where you can join our R&D developers online for technical sessions that will keep your computer warm throughout 2017. The best thing of all – even if you can’t make it online – by signing up Santa will bring the recorded session to you.

Introducing Qt Visual Studio Tools

December 6th at 5 pm CET, by Maurice Kalinowski

New possibilities with Qt WebEngine

December 13th at 5 pm CET, by Allan Sandfeld Jensen

Qt Quick Scene Graph Advancements in Qt 5.8 and Beyond 

December 20th at 10 am CET (Rescheduled from November 15th), by Laszlo Agocs


Also, stay tuned for details on upcoming webinars in January!

Make sure to check our events calendar for the full list of Qt-related events delivered by us and our partners.

The post Qt World Summit 2016 Webinar Series – Christmas Sessions appeared first on Qt Blog.

After several attempts at writing a new KCM for network configuration without actually finishing any of them, I decided to start one more time, but this time my goal was simply to transform the old editor into a somewhat nicer KCM and place it into System Settings, where this has been missing for a very long time. You can see my current result below.

This is still the same editor that existed before as a standalone application, except that the list of connections is now written in QML and is similar to the applet we have in the systray. I also had to rewrite the editor widget a bit, because it was implemented as a dialog with a tab widget inside, where each tab is represented by one setting widget (e.g. Ipv4SettingWidget). For the new KCM we now have a ConnectionEditorBase widget doing all the logic behind the scenes, like creating specific setting widgets based on the connection type and so on. This widget alone doesn’t display anything; you have to subclass it and reimplement the method taking care of layouting. This allows me to have e.g. a ConnectionEditorTabWidget which just subclasses ConnectionEditorBase and reimplements the addWidget() method to place setting widgets into a QTabWidget. In the future we can also simply write a new UI/layout on top of the ConnectionEditorBase widget and get rid of the tab layout.

Regarding functionality, it should already be almost on par with the standalone editor. There are still some missing features (like import/export of VPN connections), but besides that I think everything else is going well. The new KCM also brings some minor improvements; for example, you can now reset unsaved changes you made to a connection. My plan is to get this into Plasma 5.9, which is supposed to be released in January, so I still have plenty of time to finish the missing features, address issues I introduced during this transition, and of course take your comments into account to make this KCM as usable for everyone as I can :).

KDE Fundraising

Have you ever felt that you wanted to give back to the KDE project? As the season of giving draws near there's never been a better time to support KDE and help the project continue to bring free software to millions of lives worldwide.

By participating in the end of year fundraiser, you can help us in our mission. Your donations are used to pay for transport and accommodation for developers to attend sprints, as well as to support the server infrastructure required to keep the project running.

Donations over 30€ are eligible to receive a cute postcard designed by KDE artists, sent to the address of the donor's choice. For more information, including more details on donation rewards, please visit the KDE End of Year 2016 Fundraising page.

FOSDEM 2017 is going to be great (again!) and you still have the chance to be one of the stars.

Have you submitted your talk to the Desktops DevRoom yet?


Remember: we will only accept proposals until December 5th. After that, the Organization Team will get busy and vote and choose the talks.

Here is the full Call for Participation, in case you need to check the details on how to submit:

FOSDEM Desktops DevRoom 2017 Call for Participation

Topics include anything related to the desktop: desktop environments, software development for desktop/cross-platform, applications, UI, etc.

November 27, 2016

This announcement is also available in Italian, Spanish and Taiwanese Mandarin.

As you have probably noticed, this move took a while to reach stable due to issues with our main server, which resulted in two days of downtime for our website and all related services. There was nothing we could do, since our hosting provider experienced a major subsystem malfunction. The website might be a bit unstable or slow in the following days until the issue is properly fixed. We can only apologize for any inconvenience.

But the latest updates for KDE's Plasma, Applications and Frameworks series are now available to all Chakra users.

Plasma 5.8.4 includes three weeks' worth of bugfixes and new translations, with changes mostly in the breeze theme, kwin and plasma-workspace packages.

Applications 16.08.3 includes more than 20 recorded bugfixes and improvements to kdepim, ark, okteta, umbrello and kmines, among others. kdelibs was also updated to version 4.14.26.

Frameworks 5.28.0 includes a new syntax-highlighting package, in addition to the usual bugfixes and improvements, mostly found in kio, plasma-framework, kwidgetsaddons and ktexteditor.

Other notable package upgrades and changes:


  • kirigami 1.1.0, a QtQuick-based set of components, has been added to the repos
  • openjdk 8.u112
  • cpupower 4.8.6
  • curl 7.51.0
  • dkms 2.3+git161025
  • eclipse-ecj 4.6.1
  • graphicsmagick 1.3.25
  • inetutils 1.9.4
  • libxi 1.7.8
  • ndiswrapper 1.61
  • net-tools 1.60.20160710git
  • pypy 5.6.0
  • rust 1.13.0
  • scons 2.5.1
  • sddm 0.14.0
  • tzdata 2016i

  • choqok 1.6.0
  • kdevelop 5.0.2
  • qtcreator 4.1.0

  • hugin 2016.2.0

  • wine 1.9.24
  • winetricks 20161107

It should be safe to answer yes to any replacement question by Pacman. If in doubt, or if you face another issue in relation to this update, please ask or report it in the related forum section.

Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror has synchronized with our main server after this announcement.
It’s finally done! I’m happy to tell you that Marble Maps version 1.0 has just landed in the Google Play Store (update: direct APK here if you are not using Google Play). We hope you like it as much as we do 🙂

Many thanks to all contributors who made this possible. Thanks to a multitude of performance improvements all over the place, vector rendering has become very fast. And thanks to the ever-improving vector tile creation toolchain, we are able to provide a lot more data than I anticipated some weeks ago. For the first version there are Germany and 200 cities world-wide in full detail, as well as most European countries and the USA in high detail (up to tile level 13 or 15). For the rest of the world we provide at least medium detail (up to tile level 9). The plan, of course, is to provide full vector data for the whole world in the near future.

Admit it: how many times have you seen “software from this branch is completely untested, use it at your own risk” when you checked out the latest code of a FOSS project? I bet you have, many times. For any reasonably modern project, this is not entirely true: Continuous Integration and automated testing are a huge help in ensuring that the code builds and at least does what it is supposed to do. KDE is no exception to this, thanks to its continuous integration infrastructure and a growing number of unit tests.

Is it enough?

This however does not count functional testing, i.e. checking whether the software actually does what it should. You wouldn’t want KMail to send kitten pictures as a reply to a meeting invitation from your boss, for example, or you might want to test that your office suite starts and is able to actually save documents without crashing. This is something you can’t test with traditional unit testing frameworks.

Why does this matter to KDE? Nowadays, the dream of always summer in trunk as proposed 8 years ago is getting closer, and there are several ways to run KDE software directly from git. However, except for the above strategy, there is no additional testing done.

Or, should I rather say, there wasn’t.

    Our savior, openQA

    Those who use openSUSE Tumbleweed know that even if it is technically a “rolling release” distribution, it is extensively tested. That is made possible by openQA, which runs a full series of automated functional tests, from installation to actual use of the desktops shipped by the distribution. The recently released openSUSE Leap has also benefited from this testing during the development phase.

    “But, Luca,” you would say, “we already know about all this stuff.”

    Indeed, this is not news. But the big news is that, thanks mainly to the efforts of Fabian Vogt and Oliver Kurz, now openQA is testing also KDE software from git! This works by feeding the Argon (Leap based) and Krypton (Tumbleweed based) live media, which are roughly built daily, to openQA, and running a series of specific tests.

    You can see here an example for Argon and an example for Krypton (note: some links may go dead as old tests are cleaned up; they will be adjusted accordingly). openQA exercises both distro-level functionality (the console test) and KDE-specific operations (the X11 test). The latter checks the ability to launch a terminal, runs a number of programs (Kate, Kontact, and a few others) and performs some very basic tests with Plasma as well.

    Is this enough to test the full experience of KDE software? No, but it is a solid foundation for more automated testing to spot functional regressions: during the openSUSE Leap 42.2 development cycle, openQA found several upstream issues in Plasma, which were communicated to the developers and promptly fixed.

    Is this enough for everything?

    Of course not. Automated testing only gets you so far, so this is not an excuse for being lazy and not filing those bug reports. Also, since the tests run in a VM, they cannot catch issues that only occur on real hardware (multi-screen setups, compositing). But it is surely a good start to ensure that at least obvious regressions are caught before the code is shipped to distributions and then to end users.

    What needs to be done? More tests, of course. In particular, Plasma regression tests (handling applets, etc.) are likely needed. But as they say, every journey starts with a first step.

    November 26, 2016

    Ark, the file archiver and compressor developed by KDE, has seen a lot of development for the upcoming 16.12 release. This blog post provides a summary of the most important changes.

    Advanced archive editing

    Thanks to the excellent GSoC work done by Vladyslav Batyrenko (mvlabat) this summer, it’s now possible to perform advanced editing operations within an archive: files and folders can be moved and copied inside the archive. This functionality is available either from the context menu or with the well-known keyboard shortcuts (Ctrl+C, Ctrl+X, Ctrl+V).

    Additionally, files and folders can now be added to any subfolder of an archive; in the past, files could only be added to the archive root. To do so, select a subfolder and activate the “Add Files…” item from either the “Archive” menu or the context menu.

    Finally, files and folders can be renamed, by selecting the entry and pressing F2 or choosing Rename from the context or “File” menu.


    See mvlabat’s blog post for more info on these features.


    Choose compression method

    Ark now allows setting the compression method for supported archive types, currently Zip and 7z. For instance, LZMA compression may be selected for Zip archives to improve the compression ratio (this requires 7z to be installed). Note that Zip archives using newer compression methods may not be supported by older unarchivers (e.g. the unzip utility), but should be handled by modern software such as WinZip, WinRAR and 7-Zip for Windows. The compression method can be set in the Compression section when creating a new archive.

    AES-encryption for Zip archives

    Strong AES encryption is now used by default for Zip archives when 7z is installed. Three AES key lengths are available (128, 192 and 256 bit). The classic Zip encryption method (ZipCrypto), which is known to be vulnerable but is more widely supported, can still be selected. Again, note that e.g. unzip does not support extracting AES-encrypted archives.


    Support for AR archives

    We added support for opening AR archives. This old Unix format is nowadays mostly used for static libraries (*.a) on Linux systems, so Ark can now open a static library to view the object files it contains.


    Performance improvements

    Opening large archives should be much faster with Ark 16.12. Previously, the model containing the archive entries was not created until the archive had been completely loaded from disk; now the model is populated while loading, which greatly reduces the time needed to open large archives.


    Progress information

    Ark now shows progress as a percentage for more operations (e.g. open, extract, add) than before, making it possible to estimate how long an operation will take. Additionally, progress is now always shown in Plasma’s system tray, where operations can also be aborted. When a percentage is available, it is also shown in the task manager entry (thanks to KBroulik’s work).



    Bugfixes and under-the-hood changes

    A ton of bugs were fixed and the code architecture was further modernized.

    Testing and feedback

    The 16.12 beta is now out; the release candidate should follow on December 1st and the final release on December 15th. Please test the new features and provide feedback, either as comments on this blog post or as bug reports on KDE’s Bugzilla.

    What’s next?

    For Ark 17.04 we hope to add a graphical interface for configuring the plugins Ark uses to handle the different archive formats. We are also investigating whether we can use libzip to handle Zip archives.
    If there are features you are missing in Ark, please let us know.


    Thanks to Elvis Angelaccio and Vladyslav Batyrenko (mvlabat) for their development work on Ark.

    November 25, 2016

    I spent a few days rewriting newsFish for Android, to bring it up to the same level as the SailfishOS version.

    This version is better tested, works with Nextcloud 10, has better navigation and is generally smoother all round.

    It is now compiled against Qt 5.7 and uses QtQuick.Controls instead of custom-made controls.

    It doesn’t have many advanced features yet, like adding feeds, folders or starring, but these may come.

    Get it from the Play Store, and please leave feedback!


    November 24, 2016

    libelektra is a configuration library and tool set that provides a wide range of capabilities. Here I’d like to show how to observe data model changes caused by key/value manipulations outside of the actual application, within a user desktop session. libelektra broadcasts changes as D-Bus messages. The Oyranos project will use this mechanism in the next release to keep the settings views of GUIs, like qcmsevents, Synnefo and KDE’s KolorManager, in sync with libOyranos and its CLI tools.

    Here is a small example of connecting the org.libelektra interface, via the QDBusConnection class, to a class callback function:

    Declare a callback function in your Qt class header:

    public slots:
     void configChanged( QString msg );

    Add the QtDBus API in your sources:

    #include <QtDBus/QtDBus>

    Wire the org.libelektra interface to your callback, e.g. in your Qt class’s constructor:

    if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration", "org.libelektra", QString(),
     this, SLOT( configChanged( QString ) )) )
     fprintf(stderr, "=================== Done connect\n" );

    The org.libelektra signals then arrive in your callback:

    void Synnefo::configChanged( QString msg )
    {
      fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
    }

    As the number of messages is not always known, it is useful to treat the first message as a ping and to update after a small timeout. Here is a more elaborate, practical example:

    // init a gate keeper in the class constructor:
    acceptDBusUpdate = true;

    void Synnefo::configChanged( QString msg )
    {
      // let the first message through as a ping
      if( acceptDBusUpdate == false ) return;
      // block further messages
      acceptDBusUpdate = false;
      // update the view slightly later and avoid trouble
      QTimer::singleShot( 250, this, SLOT( update() ) );
    }

    void Synnefo::update()
    {
      // clear the Oyranos settings cache (Oyranos CMS specific)
      oyGetPersistentStrings( NULL );
      // the data model reading from libelektra and the GUI update
      // code ...
      // open the door for more messages to come
      acceptDBusUpdate = true;
    }

    The above code works for both Qt4 and Qt5.

    Recently the need occurred for us to run API services from user accounts rather than with elevated access (i.e. root). I have since come to like this rather a lot, as systemd makes it super easy and, in the long run, gives regular user accounts that need to run daemon services more self-management. This is fairly ideal for unprivileged micro services run on shared servers. The basic idea is that every user can run their own systemd services, and therefore every user can operate a daemon (if allowed to).

    Setting this up initially has some pitfalls though, so I thought I would write down how this is best made to work.


    First things first: to make use of this you need systemd, logind and journald. Additionally you’ll need pam_systemd, and it needs to be loaded for sessions (distributions will usually set this up automatically for you; if not, have fun editing /etc/pam.d/ ;)).


    We will also need the actual systemd service/unit file. Generally, everything is the same as if you were writing a regular system service. This also means that you can use the same service file for system-wide or per-user use, as long as the actual service doesn’t require elevated permissions for anything.

    A simple example could be this:


    Of note is the install target, which enables our service to be started by the default target (i.e. the service gets auto-started on boot).


    Before we can get started, some additional settings are needed:

    1. Enable lingering for the user. This allows user services to exist outside active logind sessions; consequently, this needs to be done for any new user that should be able to run services this way.
      loginctl enable-linger $USERNAME
    2. Enable persistent journald logging. This is optional, but without it users cannot read their own logs unless they are in the systemd-journal system group.
      mkdir /var/log/journal && systemctl restart systemd-journald
    3. Log the lingering user out and back in to make sure the permissions are properly applied.


    To install the service file, place it in the home-directory-bound XDG directory described in the systemd.unit manpage. Usually this is ~/.config/systemd/user/.


    Once you have placed your .service file there, you’ll probably need to reload the user daemon so it picks up the new file: systemctl --user daemon-reload


    Once everything is configured and installed we can get rocking by running the commands as the user itself.

    Start the service with systemctl --user start statifier.service

    Verify it started properly with systemctl --user status statifier.service

    Enable the service for autostart via target with systemctl --user enable statifier.service

    Look at the logs with journalctl --user -u statifier.service


    Putting everything together, you can deploy new code or changes to the service file via sftp, then reload and restart the service via ssh and systemctl. This allows for really simple deployment code and no sysadmin involvement beyond the initial setup. And thanks to journald you don’t have to worry about logging: it gobbles up all output and knows it came from this service.

    I for one love it!

    Older blog entries

    Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.