Please donate what you can to help make the Randa Meetings 2016 possible.

July 23, 2016

We are approaching August, the month the KDE development team has set for the release of the big revision of its applications. But before that big update there is always testing to be done, which is why I am pleased to announce that the beta of KDE Applications 16.08 has been released. There's no stopping this! KDE rocks!

KDE Applications 16.08 beta released


On July 22 the KDE Community development team announced the beta of KDE Applications 16.08, another step in the evolution of its software ecosystem, with two fundamental goals: to keep improving the KDE applications and to continue migrating more applications to the Qt5/KF5 framework.

After work that began the very day KDE Applications 16.04 was released, the developers have been working quietly but in a coordinated and constant way, preparing the new features that await us in August.

Now is the time to freeze features and dependencies, so that the development team (and anyone else who wishes to join in) can focus on fixing bugs and polishing the applications.

More information: KDE.org

Try it and report bugs

All tasks within the Free Software world are important: developing, translating, packaging, designing, promoting and so on. But there is one that is often overlooked, and that we only remember when things do not work as they should: finding bugs.

From this blog I encourage you to be one of the people responsible for the success of the new release of the KDE applications. To do so, take part in finding and reporting bugs, something essential for the developers to fix them so that the launch of KDE Applications 16.08 is well polished. Bear in mind that many bugs exist simply because the developers have never run into them, as the circumstances for them to appear have not occurred.

To do so, install this beta and report any bugs you find at bugs.kde.org, as I explained some time ago in this blog post.

Less than four months after the last release, and after a lot of activity in our repository during this time, we're happy to announce the next release of LabPlot with a lot of new features. So be prepared for a long post.

As already announced a couple of days ago, starting with this release we provide installers for Windows (32-bit and 64-bit) in the download section of our homepage. The Windows version is not as well tested, and maybe not as mature, as the Linux version yet, and we'll spend more time in the future improving it. Any feedback from Windows users is highly welcome here!

With this release we make the next step towards providing a powerful and user-friendly environment for data analysis and visualization. Last summer Garvit Khatri worked during GSoC 2015 on the integration of Cantor, a frontend for different open-source computer algebra systems (CAS). Now the user can perform calculations in their favorite (open-source) CAS directly in LabPlot, provided the corresponding CAS is installed, and do the final creation, layouting and editing of plots and curves, and the navigation in the data (zooming, shifting, scaling), in the usual LabPlot way within the same environment. LabPlot recognizes CAS variables holding array-like data and allows selecting them as the source for curves. So, instead of providing columns of a spreadsheet as the source for x- and y-data, the user provides the names of the corresponding CAS variables.

Currently supported CAS data containers are Maxima lists and Python lists, tuples and NumPy arrays. The support for R and Octave vectors will follow in one of the next releases.

Let’s demonstrate the power of this combination with the help of three simple examples. In the first example we use Maxima to generate commonly used signal forms – square, triangle, sawtooth and rectified sine waves (“imperfect waves” because of the finite truncation used in the definitions):

Maxima Example
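Outside of Cantor, the same truncated-Fourier-series idea can be sketched in plain Python/NumPy; the frequency and truncation order below are illustrative choices, not necessarily the ones used in the screenshot:

```python
import numpy as np

t = np.linspace(0, 2, 1000)   # time axis in seconds
f = 2.0                       # signal frequency in Hz (illustrative)
N = 10                        # truncation order; a finite N gives the "imperfect" shapes

# Square wave: 4/pi * sum of odd harmonics sin(2*pi*(2k-1)*f*t)/(2k-1)
square = 4 / np.pi * sum(
    np.sin(2 * np.pi * (2 * k - 1) * f * t) / (2 * k - 1) for k in range(1, N + 1))

# Sawtooth wave: 2/pi * sum of (-1)**(k+1) * sin(2*pi*k*f*t)/k
sawtooth = 2 / np.pi * sum(
    (-1) ** (k + 1) * np.sin(2 * np.pi * k * f * t) / k for k in range(1, N + 1))

# Triangle wave: 8/pi**2 * sum of (-1)**k * sin(2*pi*(2k+1)*f*t)/(2k+1)**2
triangle = 8 / np.pi ** 2 * sum(
    (-1) ** k * np.sin(2 * np.pi * (2 * k + 1) * f * t) / (2 * k + 1) ** 2
    for k in range(N))
```

The resulting arrays can then serve as curve sources, just like the Maxima lists described above.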

In the second example we solve the differential equation of the forced Duffing oscillator, again with Maxima, and plot the trajectory, the phase space of the oscillator and the corresponding Poincaré map with LabPlot to study the chaotic dynamics of the oscillator:

Maxima Example
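For readers who want to generate similar data without Maxima, here is a minimal Python/SciPy sketch of the forced Duffing oscillator; the parameter values, initial conditions and sampling are illustrative, not necessarily those used in the screenshot:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing oscillator: x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.5, 1.2

def duffing(t, y):
    x, v = y
    return [v, -delta * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)]

T = 2 * np.pi / omega                                    # forcing period
t_eval = np.linspace(0, 100 * T, 10000, endpoint=False)  # 100 samples per period
sol = solve_ivp(duffing, (0, t_eval[-1]), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)

trajectory = sol.y[0]        # x(t)
phase_space = sol.y          # (x, x') pairs
poincare = sol.y[:, ::100]   # one phase-space sample per forcing period
```

Plotting `poincare` as a scatter of (x, x') points gives the Poincaré map discussed above.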

Python, in combination with NumPy, SciPy, SymPy, etc., has become a serious alternative in the scientific community to many other established commercial and open-source computer algebra systems. Thanks to the integration of Cantor, we can do the computation in the Python environment directly in LabPlot. In the example below we generate a signal, compute its Fourier transform and illustrate the effect of Blackman windowing on the Fourier transform. As in the Maxima examples, only the data is generated in Python; the plots are done in LabPlot.

FFT with Python
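The same windowing experiment can be sketched in plain Python/NumPy; the signal frequencies and noise level below are assumptions for illustration:

```python
import numpy as np

fs = 1000.0                     # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.1 * rng.standard_normal(t.size))

# Fourier transform without and with a Blackman window
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum_rect = np.abs(np.fft.rfft(signal))
spectrum_blackman = np.abs(np.fft.rfft(signal * np.blackman(t.size)))

# The windowed spectrum trades a wider main lobe for much lower side lobes
peak_freq = freqs[np.argmax(spectrum_rect)]
```

Plotting `spectrum_rect` and `spectrum_blackman` against `freqs` reproduces the comparison shown in the screenshot.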

In this release we greatly increased the number of analysis features.

The Fourier transform of the input data can now be carried out in LabPlot. There are 15 different window functions implemented, and the user can decide which relevant value to calculate and plot (amplitude, magnitude, phase, etc.). Similarly to the last example above, which was carried out in Python, the screenshot below demonstrates the effect of three window functions, with the calculation of the Fourier transform now done directly in LabPlot:

FFT with LabPlot

For basic signal processing LabPlot provides a Fourier filter (a linear filter in the frequency domain). To remove unwanted frequencies in the input data, such as noise or interfering signals, low-pass, high-pass, band-pass and band-reject filters of different types (Butterworth, Chebyshev I+II, Legendre, Bessel-Thomson) are available. The example below, inspired by this tutorial, shows the signal for "SOS" in Morse code superimposed with white noise across a wide range of frequencies. The Fourier transform reveals a strong contribution at the actual signal frequency. A narrow band-pass filter positioned around this frequency makes the SOS signal clearly visible:

Fourier Filter Example
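The band-pass idea can be sketched in Python/SciPy with a noisy tone standing in for the Morse recording; the carrier frequency and filter edges are illustrative, not the values from the tutorial:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
carrier = np.sin(2 * np.pi * 600 * t)                 # the "signal" tone
noisy = carrier + 2.0 * rng.standard_normal(t.size)   # buried in broadband white noise

# Narrow Butterworth band-pass centred on the signal frequency
b, a = butter(4, [550, 650], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, noisy)

# Most of the out-of-band noise energy is removed
noise_before = np.std(noisy - carrier)
noise_after = np.std(filtered - carrier)
```

Because the passband is narrow relative to the sampling rate, the residual noise drops sharply while the tone itself passes through almost unchanged.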

Another technique (actually a reformulation of low-pass filtering) to remove unwanted noise from the true signal is smoothing. LabPlot provides three methods to smooth the data: moving average, Savitzky-Golay and percentile filter. The behavior of these algorithms can be controlled by additional parameters like weighting, padding mode and polynomial order (for the Savitzky-Golay method only).

Smoothing Example
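Two of the three smoothing methods can be sketched in Python/SciPy; the window length and polynomial order below are illustrative:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + 0.2 * rng.standard_normal(x.size)

window = 21  # smoothing window in points (odd)

# Moving average: simple uniform weighting over the window
moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

# Savitzky-Golay: local polynomial fit, preserves peak shapes better
smoothed = savgol_filter(noisy, window_length=window, polyorder=3)

err_noisy = np.abs(noisy - np.sin(x)).mean()
err_smoothed = np.abs(smoothed - np.sin(x)).mean()
```

The weighting and padding-mode parameters mentioned above correspond to the kernel shape and the boundary handling of such filters.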

To interpolate the data, LabPlot provides several interpolation methods (linear, polynomial, splines of different types, piecewise cubic Hermite polynomials, etc.). To simplify the workflow for many different use cases, the user can select what to evaluate and plot at the interpolation points: the function, the first derivative, the second derivative or the integral. The number of interpolation points can be determined automatically (5 times the number of points in the input data) or provided explicitly by the user.
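This workflow can be sketched in Python/SciPy with a cubic spline, including the "5 times the number of input points" rule and the evaluation of derivatives and the integral (the sample function is illustrative):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sparse samples of a known function
x = np.linspace(0, np.pi, 10)
y = np.sin(x)

spline = CubicSpline(x, y)
x_new = np.linspace(0, np.pi, 5 * x.size)  # 5 times the number of input points

y_interp = spline(x_new)               # interpolated function values
y_prime = spline(x_new, 1)             # first derivative
y_second = spline(x_new, 2)            # second derivative
integral = spline.integrate(0, np.pi)  # integral over the full range
```

For sin(x) on [0, π] the exact integral is 2, which the spline reproduces closely even from only 10 samples.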

More new analysis features and the extension of the already available feature set will come in the next releases.

A couple of smaller features and improvements were added. The calculation of many statistical quantities was implemented for columns and rows in LabPlot's data containers (spreadsheet and matrix):

Column Statistics

Furthermore, the content of the data containers can be exported to LaTeX tables. The appearance of the final LaTeX output can be controlled via several options in the export dialog.

LaTeX Export

To further improve the usability of the application, we implemented filter and search capabilities in the drop-down box for the selection of data sources. In projects with a large number of data sets it's now much easier and faster to find and use the proper data set for the curves in plots.

A new small widget for taking notes was implemented. With this, the user's comments and notes on the current activities in the project can be stored in the same project file:

Notes Example

To perform better with a large number of data points, we implemented double-buffering for curves. Currently, applying this technique in our code slightly worsens the quality of the plotted curves, so we decided to introduce a configuration parameter to control this behavior at run-time. By default, double-buffering is used and the user benefits from the much better performance. Users who need the best quality should switch off this parameter in the application settings dialog. We'll fix this problem in the future.

The second performance improvement coming in version 2.3.0 is the much faster generation of random values in the spreadsheet.

There are still many features in our development pipeline, a couple of which are already being worked on. Apart from this, this summer we again have contributions from three Google Summer of Code students, working on support for FITS, on a theme manager for plots and on histograms.

You can count on many new cool features in the near future!

Randa Meetings 2016 Fundraiser Campaign

Today we continue with the fourth installment of the section I started on the blog a few weeks ago with the interview with Javier Viñal. Remember that the section is called Translating KDE, and in it we present a series of interviews with the magnificent KDE translation team, thanks to which we can enjoy the work of the KDE programmers in a multitude of languages. The protagonist of this fourth interview is Rocío Gallego, a member of the Spanish KDE translation team. I hope you enjoy it.

Rocío Gallego: Translating KDE (IV)

To begin with, I would like you to introduce yourself to the readers of the blog.
My name is Rocío Gallego and I am a professional translator. For more than ten years I worked in software development and IT consulting in the SAP and Siebel environments, but a few years ago I had to adapt to new family circumstances, and since then I have devoted myself to technical translation.
Does collaborating with KDE take up much of your time? Are there periods of high activity and others of little?

Actually, translating KDE, or localization as we translators call it, demands as much dedication as you want to give it. It depends on the number of packages you want to take charge of.

It is always interesting to learn how someone got started in this world. How was it for you?

In my profession I need to use many kinds of applications, and Linux offers me the possibility of choosing among a large number of excellent applications that are free to use, free of charge and totally legal, including the desktop I use. It seemed to me that devoting part of my time and skills to translating KDE was the least I could do in return.

A translator's motivation

You have almost answered me already, but let's be more specific: what is your motivation, and what does your work as a translator bring you?

I love translating, and I think it is very useful for bringing great tools to more people who otherwise could not use them. For this reason, and for what I mentioned in the previous question, I find it very satisfying work.

The KDE games

Which applications or parts of KDE have you translated?

Currently I am in charge of the following packages: applications, extragear-base, extragear-sysadmin, kdeaccessibility, kdeadmin, the educational games of kdeedu, kdewebdev and www.
I also handle the documentation of frameworks and kde-runtime.
Previously I was also in charge of the games: kdegames and playground-games.


I think no one can doubt your great work and dedication. In fact, some readers may feel motivated and want to lend a hand. How can they start?
On the website of our Spanish KDE translation team there is a section that explains in detail how to start contributing. Here is the link: http://es.l10n.kde.org/empezar.php.

Quick questions for Rocío Gallego

Let's now move on to a round of simple but interesting questions. KDE 4 or Plasma 5?

Plasma 5

Your favorite KDE application

It's hard to choose. I find KSnapshot and Okular very useful for their versatility.

An application you would like to have natively in KDE

A speech recognition and dictation application.


And now your golden minute: say whatever you like to the readers of KDE Blog.

Well, I would just like to encourage KDE users to join the community and contribute their skills, so that our favorite desktop keeps getting better and reaches more people.

A pleasure to read you, Rocío, from the bottom of my heart. Hopefully one day we will meet at an Akademy, a sprint or some other event. Thanks for your time.

Big-ass Disclaimer: What follows is purely personal opinion as a geek, technophile, and free and open source software (FOSS) and Linux user. It does not, in any way, reflect the opinions of my employer.

As a geek writing about gadgets and technology, I often find myself drooling over the latest innovations and insanity in the industry. But as an open source and Linux user, I lament how the majority, if not all, of those are pretty but caged gardens. There are, of course, existing "open" systems, but each one represents a compromise for me. Android is basically an open source code dump, with development happening behind closed doors. And it is so far removed from the Linux that we've grown to know and love that it's barely recognizable as Linux. Sailfish OS has technology closer to my heart (like Qt) and is indeed closer to a traditional Linux system, but its availability on actual commercial devices with modern, decent specs leaves much to be desired. Mer/Nemo is far from being something usable even if you install it on, say, a Nexus device.

So when Ubuntu and Canonical revealed they were partnering with actual, big manufacturers for Ubuntu mobile devices, a spark of hope was rekindled in my heart. Let it be clear, I am by no means an Ubuntu user, not even a fan. I left the fold nearly a decade ago, after having spent quite some time using and contributing to Kubuntu (to the point of becoming a certified “member” even, though I never ascended to the Council). In terms of loyalties and usage, I am a KDE user (and “helper”) foremost. I use Fedora because it just works for me, for now. So, yes, an Ubuntu Touch device would be another compromise for me, but it would be the smallest one. Or so I hoped.

When opportunity knocked offering the chance to review two of the latest commercial Ubuntu Touch devices, well, let’s just say it didn’t have to knock twice. To be specific, I got my hands on a bq Aquaris M10 first, and then a Meizu PRO 5. I’ve already written thousands of words on both (not exaggerating on the numbers), so I’m not going to regurgitate them here. For the curious, here are the links to those reviews:

In terms of design and hardware, the two couldn't be more different. The bq Aquaris M10 tablet is plastic, mid-range and, at a glance, quite plain. The Meizu PRO 5 is metal and boasts 2015 flagship hardware; it also looks nice to boot. But surprisingly, the Aquaris M10 performed admirably, despite choking a few times on more resource-intensive work. Hardware-wise, I really have no complaints, as both work as advertised. And those happen to also be the least interesting aspects of the devices.

The App Situation

I'm here to write more about the software experience, the Ubuntu Touch experience: the defining feature of these two otherwise fully Android devices. I wish it were otherwise, but I have been sorely disappointed with the end result of my tour. Perhaps I set my expectations too high, or perhaps I put too much faith in the marketing spiel. The good news is that the journey isn't over yet, and the story might very well still change.

While Ubuntu Touch might look and feel like a regular Linux system on top, albeit with the new, more touch-friendly version of Unity 8, beneath it has a few things in common with Android, apart from using Android drivers. For better or for worse, Ubuntu Touch adopts the Android convention of a read-only system partition. In a nutshell, even though you can actually gain root, or rather sudo, far more easily than on an Android system, you cannot effect permanent changes. In short, you can't really install software via the age-old apt-get (just apt now) method. Actually you can, but only by explicitly making the system partition read-write. But if you do that, you will no longer be able to receive OTA updates, which, based on experience, are very, very desirable. For example, the most recent OTA-11 added Miracast support, which meant that the Meizu PRO 5, whose USB-C port doesn't support HDMI out, can finally use Convergence. The next OTA will even add fingerprint scanner support.

So you have a choice: either hack your way to being able to install regular packages the regular way (as long as they are built for ARM) while depriving yourself of OTAs, or stick to the default settings and only the apps available from the Ubuntu Store. Neither is ideal, and both are unacceptable.

The selection of apps in the Store is dismal, to be blunt. Of the few that are there, around 80% are simply wrappers around web apps or web pages. And while there are some who would rally behind web apps, and in this case they do actually save Ubuntu Touch some space, the ugliness of their unoptimized experiences is easily seen and painfully felt. On the Meizu PRO 5 the situation is a bit worse. In regular mobile/phone mode, the browser's user agent is fixed as an Android device, which means web services will try to force you to use the Android app instead and will not go further. Yes, those web services are partly to blame, but the default browser app has no easy way to change the user agent. Conversely, if you switch to desktop mode, it identifies itself as a desktop browser and things work as intended. To cut to the chase, there is such a severe lack of usable native apps, written in Ubuntu Touch's preferred QML style, that it would make even Windows 8's store seem like a jam-packed party.


But wait, there's light at the end of the tunnel, right? There's that Convergence that Ubuntu, Canonical, and Mark Shuttleworth have been singing about, right? Well, yes and no. For the unfamiliar, Convergence is a nifty concept that, in a nutshell, means you can use your phone as your desktop if you give it the proper peripherals, which, in this case, means an external display, a keyboard, and a mouse. Microsoft later revealed a similar concept called Continuum (for Mobile), but the difference between the two is that Convergence doesn't limit what type of software you can run. On Microsoft's Continuum, only blessed "Universal" apps can run in desktop mode. On Convergence, it's a free-for-all. In fact, you can even run those conventional desktop apps while in mobile mode. At least that's the theory.

In practice, Convergence has a few gaping holes that all sum up to one thing: it blocks the very productivity it promises to enable by turning your smartphone into a desktop.

First, there is the fact that you can't even install those desktop apps in the first place. Firefox, LibreOffice, GIMP and Gedit come preinstalled on the bq Aquaris M10, and only those. The Meizu PRO 5 is worse off, as it doesn't even have those. They are coming, I was told, in a future update. Perhaps the reasoning was that the Meizu PRO 5 initially didn't support Convergence anyway; but now that it does, the apps should be installed. But good luck installing any other desktop app. Remember the above? You can't. At least not directly and not easily. It's possible, but you'll have to work for it, and even then you will be sacrificing some things to gain others.

And then there's my pet peeve: copy and paste. It doesn't work. Or rather, it only works between native Ubuntu Touch apps. It doesn't work between Ubuntu Touch and desktop apps. Heck, it doesn't even work between desktop apps. I'm not completely sure of the rationale, but I have a hunch that it all ultimately boils down to the fact that Ubuntu Touch uses Mir for its windowing system, not X11. Those two, especially their clipboards, just don't mix well. In order to allow desktop apps to run in a Mir environment, Ubuntu had to implement a sandboxing environment, which also has a nasty side effect. Each sandbox, and therefore each desktop app, has its own "view" of the file system, its own "view" of the user's home directories. Saving a file in one app is no assurance you'll see the file in a different app. Sometimes it works, sometimes it doesn't.

So yes, I blame Mir, both for the clipboard and for the necessity to sandbox desktop apps. To be fair, Wayland has a similar clipboard interoperability problem with X11, but probably a saner solution. I have more confidence, at least, that the Plasma developers, particularly those working on KWin, already have this problem in mind.

A Never Ending Story

I am, fortunately, not going to end on a sad note, because I still have hope. Somewhat. Ubuntu Touch practically has rolling releases and, from what I've gathered, the next OTA will bring quite a few goodies: Libertine (Convergence) apps, fingerprint scanner support, etc. I have no idea if the copy/paste problem will also be resolved by then. Suffice it to say, I haven't been this excited about an Ubuntu release since 2008.

That said, it's not going to fix all my gripes. There will still be a great lack of native apps, web apps will still work terribly, if at all, and you still can't install apps through apt-get. The latter probably isn't going to change unless Ubuntu and Canonical shift direction a bit. I'm not that averse to doing the work needed to actually get a somewhat regular Linux system working underneath Ubuntu Touch, as long as I can be assured I can go back to a pristine copy should I want to get OTA updates again. I'll figure it out soon enough, but only after OTA-12!

Dreaming of Kogs and Gears

That adventure in Ubuntu Touch land has had me once again pining for a KDE-made solution (KDE being the community now, not the desktop software). There have been, and continue to be, efforts in that area, but not yet to the same extent as Ubuntu Touch. The community, and even the companies supporting its development, just don't have the same resources as Canonical to ship a commercial device as well. Until that happens, which has its own disadvantages, we'll have to rely on sheer community power.

Sadly, my experience with even the latest Plasma Mobile project has been rather depressing. Not because of the state of the software, but because of how I seem to hit corner cases where things don't work for me when they work for others. I guess I just have terrible luck. I do want to try again, and maybe I'll have a different and better experience this time. But that will be another story for another time.

July 22, 2016

Because of unforeseen circumstances we had to rejig our release schedule, so there was no release last week. Still, we wanted to bring you a foretaste of some of the goodies that will be in the 3.0.1 release, now planned for September 5th. There's lots to play with here, from bug fixes (the double dot in file names is gone, the crash with cheap tablets is gone, a big issue with memory leaks in the graphics card is solved) to features (soft-proofing, among others). There may also be new bugs, and not all new features may work correctly yet. Export to animated GIF or video clips is still in development and will probably not work well outside the developers' computers.

After all, this is a development snapshot. Still, please experiment with it and test it, and we’re pretty sure that for many purposes it might be better already than the 3.0 stable release!

On the OSX front we’ve made progress, too, and this build should run fine on OSX 10.9 upwards, including 10.10. All libraries are updated to the latest version, too.


On Windows, Krita supports Wacom, Huion and Yiynova tablets, as well as the Surface Pro series of tablets. The portable zip file builds can be unzipped and run by double-clicking the krita link.

Krita on Windows is tested on Windows 7, Windows 8 and Windows 10. There is only a 64-bit Windows build for now. There is now also a debug build that, together with the DrMingw debugger, can help with finding the cause of crashes. See the new FAQ entry. The Windows builds can be slower than usual because vectorization is disabled.


For Linux, we offer AppImages that should run on any reasonably recent Linux distribution. You can download the AppImage, make it executable and run it in place. No installation is needed. At the moment, we only have AppImages for 64-bit versions of Linux.

OSX and MacOS

Krita on OSX will be fully supported with version 3.1. Krita 3.0 for OSX is still missing Instant Preview and High Quality Canvas scaling. There are also some issues with rendering the image; these follow from Apple's decision to drop support for the OpenGL 3.0 compatibility profile in their display drivers, and from issues with touchpad and tablet support. We are working to reimplement these features using the OpenGL 3.0 Core profile. For now, we recommend disabling OpenGL when using Krita on OSX for production work. Krita for OSX runs on 10.9, 10.10 and 10.11, and is reported to work on macOS too.

It has been a while since I last talked about icons for Plasma. The main reason is that my source of information about them has failed me: with the recent overhaul of OpenDesktop they still have not managed to restore the RSS feeds, so I missed telling you about the appearance of some icon themes, such as La Capitaine. It is time to catch up.

La Capitaine, a new icon theme for Plasma 5

From krourke comes La Capitaine, a magnificent icon theme for Plasma 5 inspired, as Antü originally was, by Mac OS X and Google's Material Design ideas. In other words, La Capitaine is a work derived from Antü (formerly El General) and Numix Circle.

Each La Capitaine icon is a scalable vector, which guarantees its quality at any size. The author notes that he keeps working on the theme, adding new icons and updating existing ones. Evidently, the goal is to make the theme as complete as possible.

For this reason, the icon package is updated quite often, so it is advisable to clone the git repository and check it every so often to stay up to date.

La Capitaine

How to install La Capitaine

There are several ways to install La Capitaine:

In short, a good theme for personalizing our desktop. By the way, if you like it, don't forget to give it a + on KDE-Look and share it on your social networks. I am sure the author will appreciate it.

More information: KDE-Look



The first point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes.

See the 16.04.1 release announcement.

Download 16.04.1 images.


I have a few favorite KDE conventions that I really love to participate in.

Randa and LaKademy are always awesome; both are focused on hacking, and I surely do love to hack.

At LaKademy I spent my days working on Subsurface, reworking the interface and trying to make it more pleasant to the eye.

In Randa I worked on KDevelop and Marble, but oh my…

I spent a few days working on KDevelop, on one of the bugs that were preventing the 5.0 release. I tried a few things with help from Kevin Funk and Aleix Pol, but everything I fixed created another two corner cases. On the third day of trying I stopped and went to work on Marble instead, so I could clear my head. My patch was already almost 500 lines, with more than 20 commits; something told me there was something wrong with the approach, but I didn't know what else to try.

The problem was a widget being deleted inside Qt's event loop when a focus change occurred; the code should have prevented the focus loss, but it didn't, and a crash occurred instead.

When I got back to Brazil, I realized that the bug had been fixed, so someone had worked on top of it and my patch had to be discarded… I went to look at how the other developer had fixed it, and at first I didn't understand. He was using invokeMetaMethod instead of calling the method directly (this, plus a few other checks). To summarize: he fixed in 20 lines what I hadn't managed to fix in 500.

It was a really good learning experience for me; I would never have thought to use invokeMetaMethod inside the event loop.


July 21, 2016

GammaRay 2.5 has been released, the biggest feature release yet of our Qt introspection tool. Besides support for Qt 5.7, and in particular the newly added Qt 3D module, a slew of new features awaits you, such as access to QML context property chains and type information, object instance statistics, support for inspecting networking and SSL classes, and runtime-switchable logging categories.

We also improved much existing functionality, such as the object and source code navigation and the remote view. We enabled recursive access to value-type properties and integrated the QPainter analyzer into more tools.

GammaRay is now also commercially available as part of the Qt Automotive Suite, which includes integration with Qt Creator for convenient inspection of embedded targets running Linux, QNX, Android or Boot2Qt.

Download GammaRay

The post GammaRay 2.5 release appeared first on KDAB.

KDStateMachineEditor is a Qt-based framework for creating Qt State Machine metacode using a graphical user interface. It works on all major platforms and is now available as part of the Qt Auto suite.

The latest release of KDAB’s KDStateMachineEditor includes changes to View, API and Build system.

View

  • Button added to show/hide transition labels
  • Now using native text rendering
  • Status bar removed

API

  • API added for context menu handling (cf. StateMachineView class)

Build system

  • Toolchain files added for cross-compiling (QNX, Android, etc.)
  • Compilation with namespaced Qt enabled
  • Build with an internal Graphviz build allowed (-DWITH_INTERNAL_GRAPHVIZ=ON)

KDStateMachineEditor works on all major platforms and has been tested on Linux, OS X and Windows.

Prebuilt packages for some popular Linux distributions can be found here.

A Homebrew recipe for OSX users can be found here.

The post KDStateMachineEditor 1.1.0 released appeared first on KDAB.

Hello, and welcome to the usual appointment with a new release of Qt!

Qt 5.7 has just been released, and once more, KDAB has been a huge part of it (we are shown in red on the graph):

Qt Project commit stats, up to June 2016. From http://www.macieira.org/blog/qt-stats/


In this blog post I will show some of the outstanding contributions by KDAB engineers to the 5.7 release.

Qt 3D

The star of Qt 5.7 is the first stable release of Qt 3D 2.0. The new version of Qt 3D is a total redesign of its architecture into a modern and streamlined 3D engine, exploiting modern design patterns such as entity-component systems, and able to scale thanks to its heavily threaded design. This important milestone is the result of a massive effort by KDAB in coordination with The Qt Company.


If you want to know more about what Qt 3D can do for your application, you can watch this introductory webinar recorded by KDAB's Dr. Sean Harmer and Paul Lemire for the 5.7 release.

Qt on Android

Thanks to KDAB’s BogDan Vatra, this release of Qt saw many improvements to its Android support. In no particular order:

  • Qt can now be used to easily create Android Services, that is, software components performing background tasks that are kept alive even after the application that started them exits. See here for more information.
  • The QtAndroidExtras module gained helper functions to run Runnables on the Android UI thread. They are extremely useful for accessing Android APIs from C++ code that must run on the Android UI thread. More info is available in this blog post by BogDan.
  • Another addition to the QtAndroidExtras module is the QtAndroid::hideSplashScreen function, which allows a developer to programmatically hide the splash screen of their applications.
  • The QtGamepad module gained Android support.

Performance and correctness improvements

A codebase as big as Qt needs constant fixes, improvements and bugfixes. Sometimes these come from bug reports, sometimes by reading code in order to understand it better, and in some other cases by analyzing the codebase using the latest tools available. KDAB is committed to keeping Qt in a great shape, and that is why KDAB engineers spend a lot of time polishing the Qt codebase.

Some of the results of these efforts are:

  • QHash gained equal_range, just like QMap and the other STL associative containers. This function can be used to iterate over all the values of a (multi)hash that have the same key without performing any extra memory allocation. In other words, this code:
    // BAD!!! allocates a temporary QList
    // for holding the values corresponding to "key"
    foreach (const auto &value, hash.values(key)) {
        use(value);
    }

    can be changed to

    const auto range = hash.equal_range(key);
    for (auto i = range.first; i != range.second; ++i) {
        use(i.value());
    }

    which never throws (if hash is const), expands to less code and does not allocate memory.

  • Running Qt under the Undefined Behavior Sanitizer revealed dozens of codepaths where undefined behaviour was accidentally triggered. The problems ranged from potential signed integer overflows and shift of negative numbers to misaligned loads, invalid casts and invalid calls to library functions such as memset or memcpy. KDAB’s Senior Engineer Marc Mutz contributed many fixes to these undefined behaviours, fixes that made their way into Qt 5.6.1 and Qt 5.7.
  • Some quadratic loops were removed from Qt and replaced with linear or linearithmic ones. Notably, an occurrence of such loops in the Qt Quick item views caused massive performance degradations when sorting big models, which was fixed in this commit by KDAB’s engineer Milian Wolff.
  • Since Qt 5.7 requires a C++11 compiler, we have started porting foreach loops to ranged for loops. Ranged for loops expand to less code (because there is no implicit copy taking place), and since compilers recognize them as a syntactic structure, they can optimize them better. Over a thousand occurrences were changed, leading to savings in Qt both in library size and runtime speed.
  • We have also started using C++ Standard Library features in Qt. While Qt cannot expose STL types because of its binary compatibility promise, it can use them in its own implementation. A big advantage of using STL datatypes is that they’re generally much more efficient, have more features and expand to a lot less code than their Qt counterparts. For instance, replacing some QStack usages with std::stack saved 1KB of code per instance replaced; and introducing std::vector in central codepaths (such as the ones in QMetaObjectBuilder) saved 4.5KB.
  • While profiling Qt3D code, we found that the mere act of iterating over resources embedded in an application (by means of QDirIterator) uncompressed them. Then, reading a given resource via QFile uncompressed it again. This was immediately fixed in this commit by KDAB’s Director of Automotive, Volker Krause.

Other contributions

Last but not least:

  • It is now possible to use the Qt Virtual Keyboard under QtWayland compositors.
  • The clang-cl mkspec was added. This mkspec makes it possible to build Qt using the Clang frontend for MSVC. Stay tuned for more blog posts on this matter. 🙂
  • A small convenience method, QFlags::setFlag, was added to set or unset a flag in a bitmask without using bitwise operations.

About KDAB

KDAB is a consulting company dedicated to Qt, offering a wide variety of services and training courses.

KDAB believes that it is critical for our business to invest in Qt3D and Qt, in general, to keep pushing the technology forward, ensuring it remains competitive.

The post KDAB contributions to Qt 5.7 appeared first on KDAB.

Are you a victim of premature pessimisation? Here’s a short definition from Herb Sutter:

Premature pessimization is when you write code that is slower than it needs to be, usually by asking for unnecessary extra work, when equivalently complex code would be faster and should just naturally flow out of your fingers.

Despite how amazing today’s compilers have become at generating code, humans still know more about the intended use of a function or class than can be specified by mere syntax. Compilers operate under a host of very strict rules that enforce correctness at the expense of faster code. What’s more, modern processor architectures sometimes compete with C++ language habits that have become ingrained in programmers from decades of previous best practice.

I believe that if you want to improve the speed of your code, you need to adopt habits that take advantage of modern compilers and modern processor architectures—habits that will help your compiler generate the best-possible code. Habits that, if you follow them, will generate faster code before you even start the optimisation process.

Here are four habit-forming tips that are all about avoiding pessimisation and, in my experience, go a long way to creating faster C++ classes.

1) Make use of the (named-) return-value optimisation

According to Lawrence Crowl, (named-) return-value optimisation ((N)RVO) is one of the most important optimisations in modern C++. Okay—what is it?

Let’s start with plain return-value optimization (RVO). Normally, when a C++ method returns an unnamed object, the compiler creates a temporary object, which is then copy-constructed into the target object.

MyData myFunction() {
    return MyData(); // Create and return unnamed obj
}

MyData abc = myFunction();

With RVO, the C++ standard allows the compiler to skip the creation of the temporary, treating both object instances—the one inside the function and the one assigned to the variable outside the function—as the same. This usually goes under the name of copy elision. But what is elided here is the temporary and the copy.

So, not only do you save the copy constructor call, you also save the destructor call, as well as some stack memory. Clearly, elimination of extra calls and temporaries saves time and space, but crucially, RVO is an enabler for pass-by-value designs. Imagine MyData was a large million-by-million matrix. The mere chance that some target compiler could fail to implement this optimisation would make every good programmer shy away from return-by-value and resort to out parameters instead (more on those further down).

As an aside: don’t C++ Move Semantics solve this? The answer is: no. If you move instead of copy, you still have the temporary and its destructor call in the executable code. And if your matrix is not heap-allocated, but statically sized, such as a std::array<std::array<double, 1000>, 1000>, moving is the same as copying. With RVO, you mustn’t be afraid of returning by value. You must unlearn what you have learned and embrace return-by-value.

Named Return Value Optimization is similar but it allows the compiler to eliminate not just rvalues (temporaries), but lvalues (local variables), too, under certain conditions.

What all compilers these days (and for some time now) reliably implement is NRVO in the case where there is a single variable that is passed to every return, and declared at function scope as the first variable:

MyData myFunction() {
    MyData result;           // Declare return val in ONE place

    if (doing_something) {
        return result;       // Return same val everywhere
    }

    // Doing something else
    return result;           // Return same val everywhere
}

MyData abc = myFunction();

Sadly, many compilers, including GCC, fail to apply NRVO when you deviate even slightly from the basic pattern:

MyData myFunction() {
    if (doing_something)
        return MyData();     // RVO expected

    MyData result;

    // ...
    return result;           // NRVO expected
}

MyData abc = myFunction();

At least GCC fails to use NRVO for the second return statement in that function. The fix in this case is easy (go back to the first version), but it’s not always that easy. It is an altogether sad state of affairs for a language that is said to have the most advanced optimisers available to it for compilers to fail to implement this very basic optimisation.

So, for the time being, get your fingers accustomed to typing the classical NRVO pattern: it enables the compiler to generate code that does what you want in the most efficient way enabled by the C++ standard.

If diving into assembly code to check whether a particular pattern makes your compiler drop NRVO isn’t your thing, Thomas Brown provides a very comprehensive list of compilers tested for their NRVO support, and I’ve extended Brown’s work with some additional results.

If you start using the NRVO pattern but aren’t getting the results you expect, your compiler may not automatically perform NRVO transformations. You may need to check your compiler optimization settings and explicitly enable them.

2) Return parameters by value whenever possible

This is pretty simple: don’t use “out-parameters”. The result for the caller is certainly kinder: we just return our value instead of having the caller allocate a variable and pass in a reference. Even if your function returns multiple results, nearly all of the time you’re much better off creating a small result struct that the function passes back (via (N)RVO!):

That is, instead of this:

void convertToFraction(double val, int &numerator, int &denominator) {
    numerator = /* calculation */;
    denominator = /* calculation */;
}

int numerator, denominator;
convertToFraction(val, numerator, denominator); // or was it "denominator, numerator"?

You should prefer this:

struct fractional_parts {
    int numerator;
    int denominator;
};

fractional_parts convertToFraction(double val) {
    int numerator = /* calculation */;
    int denominator = /* calculation */;
    return {numerator, denominator}; // C++11 braced initialisation -> RVO
}

auto parts = convertToFraction(val);

This may seem surprising, even counter-intuitive, for programmers that cut their teeth on older x86 architectures. You’re just passing around a pointer instead of a big chunk of data, right? Quite simply, “out” parameter pointers force a modern compiler to avoid certain optimisations when calling non-inlined functions. Because the compiler can’t always determine if the function call may change an underlying value (due to aliasing), it can’t beneficially keep the value in a CPU register or reorder instructions around it. Besides, compilers have gotten pretty smart—they don’t actually do expensive value passing unless they need to (see the next tip). With 64-bit and even 32-bit CPUs, small structs can be packed into registers or automatically allocated on the stack as needed by the compiler. Returning results by value allows the compiler to understand that there isn’t any modification or aliasing happening to your parameters, and you and your callers get to write simpler code.

3) Cache member-variables and reference-parameters

This rule is straightforward: take a copy of the member-variables or reference-parameters you are going to use within your function at the top of the function, instead of using them directly throughout the method. There are two good reasons for this.

The first is the same as the tip above—because pointer references (even member-variables in methods, as they’re accessed through the implicit this pointer) put a stick in the wheels of the compiler’s optimisation. The compiler can’t guarantee that things don’t change outside its view, so it takes a very conservative (and in most cases wasteful) approach and throws away any state information it may have gleaned about those variables each time they’re used anew. And that’s valuable information that can help the compiler eliminate instructions and references to memory.

Another important reason is correctness. As an example provided by Lawrence Crowl in his CppCon 2014 talk “The Implementation of Value Types”, instead of this complex number multiplication:

template <class T>
complex<T> &complex<T>::operator*=(const complex<T> &a) {
   real = real * a.real - imag * a.imag;
   imag = real * a.imag + imag * a.real; // BUG: reads the already-updated 'real'
   return *this;
}

You should prefer this version:

template <class T>
complex<T> &complex<T>::operator*=(const complex<T> &a) {
   T a_real = a.real, a_imag = a.imag;
   T t_real =   real, t_imag =   imag; // t == this
   real = t_real * a_real - t_imag * a_imag;
   imag = t_real * a_imag + t_imag * a_real;
   return *this;
}

This second, non-aliased version will still work properly if you use value *= value to square a number; the first one won’t give you the right value because it doesn’t protect against aliased variables.

To summarise succinctly: read from (and write to!) each non-local variable exactly once in every function.

4) Organize your member variables intelligently

Is it better to organize member variables for readability or for the compiler? Ideally, you pick a scheme that works for both.

And now is a perfect time for a short refresher about CPU caches. Of course data coming from memory is very slow compared to data coming from a cache. An important fact to remember is that data is loaded into the cache in (typically) 64-byte blocks called cache lines. The cache line—that is, your requested data and the 64 bytes surrounding it—is loaded on your first request for memory absent from the cache. Because every cache miss silently penalises your program, you want a well-considered strategy for ensuring you reduce cache misses whenever possible. Even if the first memory access is outside the cache, structuring your accesses so that a second, third, or fourth access is within the cache will have a significant impact on speed. With that in mind, consider these tips for your member-variable declarations:

  • Declare the most-frequently-used member-variables first
  • Declare the least-frequently-used member-variables last
  • If variables are often used together, group them near each other
  • Try to reference variables in your functions in the order they’re declared

Nearly all C++ compilers organize member variables in memory in the order in which they are declared. And grouping your member variables using the above guidelines can help reduce cache misses that drastically impact performance. Although compilers can be smart about creating code that works with caching strategies in a way that’s hard for humans to track, the C++ rules on class layout make it hard for compilers to really shine. Your goal here is to help the compiler by stacking the deck on cache-line loads that will preferentially load the variables in the order you’ll need them.

This can be a tough one if you’re not sure how frequently things are used. While it’s not always easy for complicated classes to know what member variables may be touched more often, generally following this rule of thumb as well as you can will help. Certainly for the simpler classes (string, dates/times, points, complex, quaternions, etc) you’ll probably be accessing most member variables most of the time, but you can still declare and access your member variables in a consistent way that will help guarantee that you’re minimizing your cache misses.


The bottom line is that it still takes some amount of hand-holding to get a compiler to generate the best code. Good coding habits are by no means the be-all and end-all, but they are certainly a great place to start.

The post Four Habit-Forming Tips to Faster C++ appeared first on KDAB.

I'm a lot closer to finishing the project now. Thanks to some great support from my GSoC mentor, my project has turned out better than what I had written about in my proposal! Working together, we've made a lot of changes to the project.

For starters, we've changed the name of the ioslave from "File Tray" to "staging" to "stash". I wasn't a big fan of the name change, but I see the utility in shaving off a couple of characters in the name of what I hope will be a widely used feature.

Secondly, the ioslave is now completely independent from Dolphin, or any KIO application for that matter. This means it works exactly the same way across the entire suite of KIO apps. Given that at one point we were planning to make the ioslave fully functional only with Dolphin, this is a major plus point for the project.

Next, the backend for storing stashed files and folders has undergone a complete overhaul. The first iteration of the project stored files and folders by saving the URLs of stashed items in a QList in a custom "stash" daemon running on top of kded5. Although this was a neat little solution which worked well for most intents and purposes, it had some disadvantages. For one, you couldn't delete and move files around on the ioslave without affecting the source because they were all linked to their original directories. Moreover, with the way 'mkdir' works in KIO, this solution would never work without each application being specially configured to use the ioslave which would entail a lot of groundwork laying out QDBus calls to the stash daemon. With these problems looming large, somewhere around the midterm evaluation week, I got a message from my mentor about ramping up the project using a "StashFileSystem", a virtual file system in Qt that he had written just for this project.

The virtual file system is a clever way to approach this, as it solved both of the problems with the previous approach right off the bat: mkdir could be mapped to a virtual directory, and making volatile edits to folders is now possible without touching the source directory. It does have its drawbacks too: as it needs to stage every file in the source directory, it requires a lot more memory than the previous approach. Plus, it would still be at the whims of kded5 if a contained process went bad and crashed the daemon.

Nevertheless, the benefits in this case far outweighed the potential cons and I got to implementing it in my ioslave and stash daemon. Using this virtual file system also meant remapping all the SlaveBase functions to corresponding calls to the stash daemon which was a complete rewrite of my code. For instance, my GitHub log for the week of implementing the virtual file system showed a sombre 449++/419--. This isn't to say it wasn't productive though - to my surprise the virtual file system actually worked better than I hoped it would! Memory utilisation is low at a nominal ~300 bytes per stashed file and the performance in my manual testing has been looking pretty good.

With the ioslave and other modules of the application largely complete, the current phase of the project involves integrating the feature neatly with Dolphin and writing a couple of unit tests along the way. I'm looking forward to a good finish with this project.

You can find the source for it here: https://github.com/KDE/kio-stash (did I mention it's now hosted on a KDE repo? ;) )

Paris, August 22–26

This August, treat yourself to a Qt training in French with an expert.

Learn modern GUI application development techniques using Qt Quick (based on the QML language) as well as Qt/C++ object-oriented technology.

“My C++ team was delighted with this training. I hope to implement Qt in our apps ASAP.” CGG Veritas, Massy, France

Find out more!

See other customer testimonials.


The post Programmation Qt Quick (QML) appeared first on KDAB.

July 20, 2016

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt

Show Schedule


What have we (the hosts) been doing ?

  • Aaron
    • Working a sponsorship out with Linode
    • Working on uCycle
  •  Rick
    • #Brexit – It would be Rude Not to [talk about it]
    • Comodo – Let’s Encrypt Brand challenge https://letsencrypt.org//2016/06/23/defending-our-brand.html#1

Sponsor 1 Segment

Big Blue Button

Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org. Go check out their project.

Kubuntu News

Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Joining us today is Marius Gripsgard from the UbPorts project.


Sponsor 2 Segment


We’ve been in talks with Linode, an awesome VPS provider with super-fast SSDs, data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster. BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback

  • Plasma 5.7 is unlikely to hit Xenial Backports in the short term, as it still depends on Qt 5.6.1, for which there is currently no build for Xenial.
    There is an experimental build that Acheronuk has been working on, but there are still stability issues.

Game On

Steam Group: http://steamcommunity.com/groups/kubuntu-podcast

Review and gameplay from Shadow Warrior.


How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

You might remember that I spoke about Plasma’s Publictransport applet getting some reworking during the summer. It’s been over a month since I made that announcement on my blog and while ideally, I’d have liked to have blogged every week about my work, I haven’t really been able to. This is largely down to the… Read the full post »

We shouldn’t look like a bunch of old geeks all the time, and new users and new programmers don’t really like to use IRC; it’s just too text-based for them.

That’s why the KDE IRC channel now has a bot that forwards all messages to our Telegram channel and vice versa, so all the new cool kids can talk to all the old geeks around and continue to make KDE awesome on their platform of choice.

Thanks to the KDE Sysadmin Team for making this possible.

July 19, 2016


The KDE Community Working Group has existed for quite a while, helping develop the KDE community from behind the scenes.

Since we have so many different cultures working together, from many different countries, it’s a bit hard to find common ground on what counts as polite and correct behaviour in many situations. We have Germans, Brazilians, Indians, Mexicans, North Americans, Canadians, Romanians, Kenyans, and quite a lot of us don’t know what the correct behaviour is. We know that we should be polite and treat each other with respect, but what’s respect from one point of view may not be respect from another.

And that’s why we exist – if you think someone in the community is being abusive, please talk to us; we will be just like your mom and dad asking the kids to hug and stop fighting.

You can reach us at community-wg at kde dot org.

Also, people, Follow the rules.


Decorated booth.

Last week saw another edition of FISL, the International Free Software Forum, which has been held since 2000 in the city of Porto Alegre, Rio Grande do Sul, Brazil. Our participation, as I had already announced here, was very special because we celebrate 20 years of the KDE community this year. The birthday is only in October, but since we could not pass up the opportunity to celebrate this date at an event as important as FISL, we prepared a special program.

On the first day our mini-event took place: the Engrenagem (“Gear” in English), in which members of our community presented several talks on various issues related to KDE. The Engrenagem was opened by a talk from David Edmundson, one of the Plasma developers.



My talk was next. Its title was “20 anos de KDE: de Desktop a Guarda-Chuva de Projetos” (20 Years of KDE: From Desktop to Project Umbrella). I presented the evolution of our community, which took it from a desktop project to an incubator community. For those who did not attend the event, the talk was recorded and is available here. Below I also make the slides of my presentation available:

In addition to our talks, we also prepared some surprises for the community of fans who showed up there. On the penultimate day of the event we had a special moment in which we decorated our booth with balloons and a few other things, finishing with a birthday cake and candles! ❤


Our cake!




Those who did not see our talks can watch them here, searching for the term “Engrenagem”. All talks were recorded.

If you want to check out our photos from FISL, just visit our Flickr. :)

July 18, 2016

First of all, I need help. But before you help me, I’d like to show you what a user without development skills can do to make Plasma better.

I already posted about the System Settings redesign, and because developers are busy with other tasks, I reviewed the existing modules and updated them to fit our vision better. I know it’s not how I would prefer it in the end, but I made the changes without development skills (no compiling, no new code). I used Qt Creator to edit the .ui files and played around with QML.

The mouse cursor theme module was updated by moving the buttons to the bottom, as in most other KCMs. The height of the resolution-dependent button will be changed soon (left: Plasma 5.7, right: 5.8).


Color scheme: preview on top and buttons at the bottom. I’d like to remove the tabs and add an edit button where the tabs would pop up. For that I need a developer.


Emoticon theme: same here, rearranged buttons. In general I’m not sure it is a good idea to offer the possibility of adding a new theme in the KCM, because starting from an existing one and configuring the XML file would be easier.


I also made some other changes, but I’d like to show you that you can change something without development skills, so join the game.

If you have dev skills and are interested in working on System Settings, contact me – you can improve the user experience.

The first half of this year, I had the chance to work on icon and design for two big free-software projects.

First, I’ve been hired to work on Mageia. I had to refresh the look for Mageia 6, which mostly meant making new icons for the Mageia Control Center and all the internal tools.


I proposed to replace the Oxygen-like icons with some Breeze-like icons.
This way it integrates much better with modern desktops, and of course it looks especially good with Plasma.


The result is around 1/3 of the icons directly imported from Breeze, 1/3 modified versions, and 1/3 created from scratch. I tried to follow the Breeze guidelines as much as possible, but had to adapt some rules to the context.


I also made a wallpaper to go with it, which will be in the extra wallpaper package, so it is not used by default:

available in different sizes on this link.

And another fun wallpaper for people who are both Mageia users and Pepper & Carrot fans:

available in different sizes on this link
(but I’m not sure yet if this one will be packaged at all…)

Note that we still have some visual issues with the applets.
It seems to be a problem with how gtkcreate_pixbuf is used. But more importantly, those applets don’t even react to clicks in Plasma (while this seems to work fine in all other desktops).
Since no one seems to have an easy fix or workaround yet, if someone has an idea to help…

Soon after I finished my work on Mageia, I was hired to work on FusionDirectory.
I had to create a new theme for the web interface, and again I proposed to base it on Breeze, similar to what I did for Mageia but in yet another context. I also modified the CSS to look like the Breeze Light interface theme. The resulting theme is called Breezy, and it has been used by default since the last release.


I had a lot of positive feedback on this new theme, people seem to really like it.

To finish, a special side note for the Breeze team: thank you so much for all the great work! It has been a pleasure to start from it. Feel free to look at the Mageia and FusionDirectory git repositories to see if there are icons that would be interesting to push upstream to the Breeze icon set.

Every year I start to create a new book, and every year I delete the book folder because I think it’s going in the wrong direction. This year is no different: I’m starting to write a book about Qt 5 programming with C++11, and I hope this time things go differently. What I usually do is set up my LaTeX environment (Kile, TeX Live, a few libraries and all that) – but I was hitting a UTF-8 issue that \usepackage[utf8]{inputenc} (or utf8x) didn’t solve… And if you are not well versed in TeX debugging, things can go haywire in no time.

So I started to look at the different TeX environments that are in use nowadays. I had heard of XeTeX and LuaTeX but never gave them a go, until now. I started to study ConTeXt because I heard that it’s what all the cool kids in the hood are using. At first I was afraid, I was petrified. All the macros that I knew by heart had changed, all the packages that I used to set up had gone – what could I do?

Then I downloaded the ConTeXt help package, started playing with it, and then it hit me: the listings package didn’t work. How was I going to write a programming book if I couldn’t do proper code highlighting?

t-vim is love, t-vim is life.

A package for ConTeXt that uses Vim for code highlighting, giving you 500+ languages with code colorization already done for you. I fell in love, and the book is now starting to take shape.

I hope I don’t delete it again this year. 🙂


If you are into LaTeX / TeX and never gave ConTeXt a try, please do.


In my last entry I introduced libotp. But this name had a problem: people thought it was a library for one-time passwords, so we renamed it to libmimetreeparser.

Over the last months I cleaned up and refactored the whole mimetreeparser to turn it into a self-contained library.



As a general rule we wanted to make sure that we only have dependencies in mimetreeparser where we can easily tell why we need them. We ended up with:

  • KF5::Libkleo is the dependency we are not happy with, because it pulls in many widget-related dependencies that we want to avoid. But there is light at the end of the tunnel: we will hopefully switch to GpgME directly in the next weeks. GpgME is planning to have a Qt interface that fulfills our need for decrypting and verifying mails. The source of the Qt interface of GpgME is libkleo, which is why the patch will be quite small. At the KDEPIM sprint in Toulouse in spring this year, I already gave the Qt interface a try and made sure that our tests still pass.
  • KF5::Codecs to translate between the different encodings that can occur in a mail
  • KF5::I18n for translations of error messages. If we want consistent translations of error messages we need to handle them in libmimetreeparser.
  • KF5::Mime because the input mail is a mimetree.

Rendering in Kube

In Kube we have decided to use QML to render mails for the user, which made it easy to factor out all the HTML-rendering-specific parts. So we end up just triggering the ObjectTreeParser and creating a model out of the resulting tree. The model is then the input for QML. QML then loads different code for different parts of the mail: for example, for plain text it just shows the plain text, while for HTML it loads the part in a WebEngine view.
But as a matter of fact, the interface we use is quite new and it is currently still under development (T2308). For sure there will be changes, until we are happy with it. I will describe the interface in detail if we are happy with it. Just as sidenote, we don't want a separate interface for kube and kdepim. The new interface should be suitable for all clients. To not break the clients constently, we keep the current interface and develop the new interface from scratch and than switch we are happy with the interface.
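The split described above, parser output becoming a model that a per-type view consumes, can be sketched roughly as follows. All names here are illustrative, not the actual Kube or libmimetreeparser API:

```typescript
// Sketch of the parse/render split: the parser yields a tree of typed
// parts, a flat model is built from it, and the view layer picks a
// renderer per part type (like QML loading a different delegate).
// Names are illustrative, not the real Kube API.

type PartType = "plaintext" | "html" | "encrypted";

interface MessagePart {
  type: PartType;
  content: string;
  children: MessagePart[];
}

// The "model" step: flatten the part tree into a list the view can iterate.
function flattenTree(root: MessagePart): MessagePart[] {
  const parts: MessagePart[] = [root];
  for (const child of root.children) {
    parts.push(...flattenTree(child));
  }
  return parts;
}

// The "view" step: each part type gets its own renderer.
const renderers: Record<PartType, (p: MessagePart) => string> = {
  plaintext: p => p.content,                        // show plain text as-is
  html: p => `<webengine>${p.content}</webengine>`, // hand HTML to a web view
  encrypted: () => "[encrypted part]",
};

function renderMail(root: MessagePart): string[] {
  return flattenTree(root).map(p => renderers[p.type](p));
}
```

The point of the shape is that the parser never knows about rendering, so swapping QML for HTML output only touches the renderer table.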

Kube rendering

Rendering in Kmail

As before we use HTML as rendering output, but with the rise of libmimetreeparser, KMail also uses the message-part tree as input and translates it into HTML. So here too we have a clear separation between the parsing step (handled in libmimetreeparser) and the rendering step, which happens in messageviewer. KMail has additional support for MIME types like iTip (invitations) and vCard. The problem with these parts is that they need to interact directly with Akonadi to load information. That way we can detect whether an event is already known in Akonadi, or whether we have already saved the vCard, which then changes the visible representation of that part. This all works because libmimetreeparser has an interface for adding additional MIME-type handlers (called BodyPartFormatters).
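The idea behind such an extension point, a registry of handlers keyed by MIME type that the viewer consults part by part, can be sketched like this. This is an illustration of the pattern only; the real BodyPartFormatter interface in libmimetreeparser looks different:

```typescript
// Illustrative sketch of a formatter registry keyed by MIME type.
// Clients can register handlers for extra types (invitations, contact
// cards, ...) and unknown types fall back to a placeholder.

interface BodyPart {
  mimeType: string;
  body: string;
}

type Formatter = (part: BodyPart) => string;

const formatters = new Map<string, Formatter>();

// Clients (like a mail viewer) register handlers for extra MIME types.
function registerFormatter(mimeType: string, f: Formatter): void {
  formatters.set(mimeType, f);
}

// Dispatch by MIME type, with a fallback for unhandled types.
function formatPart(part: BodyPart): string {
  const f = formatters.get(part.mimeType);
  return f ? f(part) : `[unhandled ${part.mimeType}]`;
}

registerFormatter("text/plain", p => p.body);
registerFormatter("text/calendar", p => `invitation: ${p.body}`);
```

A handler registered this way can do arbitrary lookups (the Akonadi queries mentioned above) before deciding how the part is presented.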

Additionally, messageviewer now uses Grantlee for creating the HTML, which is very handy and makes it easy to change the visual presentation of mails just by changing the theme files. That should help a lot if we want to change the look and feel of email presentation, and it also lets us think about different themes for email presentation. We also thought about the implications of easily changeable themes and realized that it should not be that easy to change the theme files, because malicious users could fake good-looking crypto mails. That's why the theme files are shipped inside resource files.

While I was implementing this, Laurent Montel added JavaScript/jQuery support to the messageviewer. So I sat down and created an example that switches the alternative part (the HTML and text parts, which can be switched) with jQuery. In the end we came to the conclusion that this is not a good idea (D1991). But maybe others will come up with good ideas for where we can use the power of jQuery inside the messageviewer.

Alternative switcher 1

Alternative Switcher 2

July 17, 2016

Hey, everything ok?

Last week, in a cold city called Porto Alegre, in Rio Grande do Sul, Brazil, the 17th edition of FISL, the International Free Software Forum, took place.

Well… KDE has always participated in this forum, and the organization gave us a room for a whole day, so the KDE Community could give a lot of talks.

I went there to talk about Stubbornness, Campus Party, and KDE, with the goal of sharing my experience of how these three things together changed my life. The point of this talk was to introduce new people to KDE and encourage them to go to (more) events, because events and forums can change lives, like they changed mine. At my university, almost half of the students don't care about going to events, yet at these technology events you find out what companies want to share and what skills you need to learn to work with them. It was at Campus Party São Paulo 2015 that I saw that and got my trigger to start doing something outside of college. It was a small talk that I gave in 20 minutes, and I got both positive and negative feedback about it.


I also participated in another talk, with some friends, with the goal of sharing the pros and cons of the future of drones and 3D printing. Since technology evolves really fast, and the AI we are programming is getting smarter than ever, we should think about the consequences of this technology. With 3D printing, a drone could replicate itself, and depending on how that is used, a lot of bad things could happen. We discussed a bit of the ethics behind the use of drones, and how they can affect the world we live in. Anyone with a 3D printer and some knowledge of robotics can build a drone. In Brazil we have already had some issues with the use of drones, which started a discussion about regulating their use: an airspace the drone has to stay in, so that it cannot affect other things like airplanes and helicopters. Must there be regulation? Maybe. I need to study the subject more before I can say more about it.


Also, KDE is turning 20 years old! YAY! And we threw a special party at our stand to celebrate this special birthday!


We had this *awesome* cake!

And balloons!

And it was so great! It was a great time, with the KDE Brazil community and people who use KDE every day sharing this *awesome* day with us!

And yesterday a friend of mine, Elias Silveira, published on Facebook this drawing of Konqi, which he made using Krita 3.0! He is so skilled at drawing things in Krita that I am amazed every time I see a work of his in my feed. You can see Elias's works here.


Well… 20 years of KDE, and almost one year for me in this community, and I'm very thankful to be part of it. Thanks, KDE Brazil, for the support, and thanks to the whole community for giving me the chance to be a KDE contributor!

Happy Birthday, KDE! \o/


Late summer brings a couple of interesting dates for the Marble community: On the Desktop we’ll release Marble 2.0 and around the same time our Android app Marble Maps will have its first stable release. Later on in September it’s time to celebrate the 10th birthday of the Marble project!

The common theme of the upcoming releases is the introduction of the Vector OSM map: a beautifully styled map based on data from the OpenStreetMap project that spans the entire world from globe to street zoom level. To make this possible we're working very hard behind the scenes to optimize both the tile data and the rendering in Marble to give you a smooth experience.

This weekend brought some good news for the tile data generation: the awesome KDE sysadmins (thanks, Ben!) set up a beefy server for us, which is now busy generating vector tiles for larger areas. Together with a second server that has plenty of space to host data, we now have great infrastructure to scale Vector OSM.

The tool chain running on the tile-builder server is subject to continuous improvements over the next weeks to optimize the tile generation process: on the one hand, tiles need to be generated as fast as possible (to allow us to update them regularly with fresh OSM edits); on the other hand, the generated data needs to be filtered, simplified and optimized (to reduce processing time during rendering).

The alpha version of Marble Maps already integrates the Vector OSM map theme. OSM bitmap tiles are activated by default, though, so you have to enable vector rendering explicitly. This choice was made deliberately because the rendering performance at intermediate zoom levels is still far from what we want it to be. We hope to resolve that in the next two weeks.

Keeping that in mind, once Vector OSM is activated you'll notice that low zoom levels (where you see the entire globe) are already completely covered by vector tiles, thanks to our GSoC student Akshat. Street level is available for selected cities and, thanks to tile-builder, is now expanding to country level. Germany and California will be the first to be finished soon.



Hey! I'm making KDE Now, an application for the Plasma Desktop. It helps the user see important stuff from their email on a plasmoid. It's similar to what Google Now does on Android. To know more, click here


Another week has passed, and I'm back with more updates on what I've done over the last week. I worked on the plasmoid side of things this time. Andres (Andy) Betts of the KDE VDG was very generous and gave me some nice mockups for the Event and Hotel cards.

I wrote the UI of these two cards based on the mockups I got. It was all in QML and went like a breeze. Next up, I worked on the plasmoid to make sure that the cards get loaded dynamically in the correct way. An earlier commit had done this, but I figured out that it was not completely correct.

I also made some improvements on the daemon side of things. Mainly, I devised a way to load the extractor plugins just once and set different map data on them. Earlier, the plugins were loaded every time an email update was received. My debugging skills were put to the test trying to figure this one out, along with some database issues I'm facing when the daemon is started by the plasmoid (over D-Bus). These two bugs coupled together were not letting the cards generate. I haven't made the database fix yet, but that's because it is low on my priority list; I also have some ideas in mind regarding it, so I'd rather postpone it.
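The load-once idea could look roughly like this sketch: plugin instances are constructed on first use, and later updates only hand them fresh data instead of reinstantiating them. All names here are hypothetical, not the actual KDE Now code:

```typescript
// Sketch of caching plugin instances so each is constructed only once.
// Names are hypothetical; the real KDE Now daemon differs.

interface ExtractorPlugin {
  name: string;
  extract(mapData: Map<string, string>): string;
}

class PluginCache {
  private plugins: ExtractorPlugin[] | null = null;
  constructor(private loadAll: () => ExtractorPlugin[]) {}

  // Load the plugins on first use only; later calls reuse them.
  getPlugins(): ExtractorPlugin[] {
    if (this.plugins === null) {
      this.plugins = this.loadAll();
    }
    return this.plugins;
  }

  // For every incoming email update, hand fresh map data to the
  // already-loaded plugins instead of reloading them.
  onEmailUpdate(mapData: Map<string, string>): string[] {
    return this.getPlugins().map(p => p.extract(mapData));
  }
}
```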

Finally, a working model is ready. Well, sort of: not for the end user, but for the developer. Check out the screenshots –

[Screenshots: dark and light variants of the cards]



There are two cards here, the Event card and the Hotel card. Both of them were dynamically generated from test emails. The first one (the Hotel card) was generated on the plasmoid after I pinned the plasmoid to the desktop. The lower one (the Event card) was generated after I sent another test email while the plasmoid was still on the desktop. This one used the IDLE feature of the IMAP client daemon. FTW!

Things look good from here. I must say I got a morale boost when I saw those cards "magically" generating for the first time. You see, I've been working on this for a long time, and all I could see was the debug output of the daemon. While that is informative and all, it is not what I intended in the first place. Seeing the results made me very happy, and I guess I even announced it on an IRC channel 😄

For the next part, I'll be working on the Flight and Restaurant cards. I don't have mockups for those, but I know they'll be along the lines of what we have right now. It could take me some time, though, to come up with and finalize the UI for them. I also have to find a way to get the user details; I've been testing with hard-coded values, but that has to change, of course.

Thanks for reading.

See you later!

Slowpoke that I am, it's finally time to present my project. Better late than never. :)

I have finished a skin for MediaWiki:

[Screenshots of the Neverland skin]

For some reason it doesn't quite feel like a MediaWiki skin to me. But it works.

When I was working with MediaWiki, I realized its source code isn't good. My mentor said:

hence i wanted you to see it:)

But Wikipedia is one of the biggest websites in the world, and it's using MediaWiki.

There are some awkward moments when working with MW. Here is an example:

<?php foreach ( $this->getPersonalTools() as $key => $item ): ?>
<?php echo $this->makeListItem( $key, $item); ?>
<?php endforeach; ?>

Source (I modified a little bit)

It renders something like this:

<li id="pt-userpage">
    <a href="/mediawiki/index.php/User:Minhchu" class="new" dir="auto" title="Your user page [Alt+Shift+.]" accesskey=".">Minhchu</a>
<li id="pt-mytalk">
    <a href="/mediawiki/index.php/User_talk:Minhchu" class="new" title="Your talk page [Alt+Shift+n]" accesskey="n">Talk</a>
<li id="pt-preferences">
    <a href="/mediawiki/index.php/Special:Preferences" title="Your preferences" class="">Preferences</a>
<!-- ... -->

I wanted to add a style to the <li> tags. I checked the wiki. Nothing there, so I looked into the source code: mediawiki/includes/skins/BaseTemplate.php, line 391. So I tried:

<?php foreach ( $this->getPersonalTools() as $key => $item ): ?>
<?php echo $this->makeListItem( $key, $item, ['link-class' => 'header__actions--item']); ?>
<?php endforeach; ?>

But it rendered:

<li id="pt-userpage">
    <a href="/mediawiki/index.php/User:Minhchu" class="new header__actions--item" dir="auto" title="Your user page [Alt+Shift+.]" accesskey=".">Minhchu</a>
<li id="pt-mytalk">
    <a href="/mediawiki/index.php/User_talk:Minhchu" class="new header__actions--item" title="Your talk page [Alt+Shift+n]" accesskey="n">Talk</a>
<li id="pt-preferences">
    <a href="/mediawiki/index.php/Special:Preferences" title="Your preferences" class="header__actions--item">Preferences</a>

What!? Something was wrong here: the class header__actions--item went onto the link inside the <li>, not onto the <li> itself. So I had to read the source code again. Finally I came up with a hacky way:
<ul class="header__categories">
    <?php foreach ( $this->getPersonalTools() as $key => $item ): ?>
        <?php $item['class'] = 'header__actions--item' ?>
        <?php echo $this->makeListItem( $key, $item, ['link-class' => 'header__categories--link'] ); ?>
    <?php endforeach; ?>
</ul>

It’s awkward but it does work.

There are still some steps left to finish the MediaWiki builder. Now I have to deal with the documentation for Neverland, since GSoC lists it as a requirement.

July 16, 2016

Determined to find a rational web-based development environment, something stable that works, has support, is flexible enough to deal with future demands, and involves minimal HTML, I ended up using Angular 2 on a Meteor base.

Yes, it is alpha, now RC, not yet released. That hasn't caused many problems, as I'm still early, banging out the basic structure and development strategy.

A few things I've figured out.

  1. JavaScript in the browser has all the characteristics of multithreaded programming without the tools and APIs to deal with it. Once you get above a certain level of complexity it falls over a cliff, and your application can easily do the same. Frameworks such as Angular and React are attempts, very good ones, at imposing order on the chaos.
  2. Tracking down timing problems, essentially race conditions, takes hours of time, time I don't have. Deeply nested squiggly brackets are the devil's work.
  3. Angular 2 is quite remarkable in its power and simplicity. I have found that if something requires figuring out some deep-in-the-weeds function of the framework or platform, I'm making a mistake. I back out of the hole, and I find there is a two-line call that does exactly what I need. The paths are well trodden; almost all the issues I'm working through have already been figured out, so learn the framework. There are bugs of course, and a few shortcomings, but each release candidate seems to sort a few of them out.
  4. ngrx/store and /effects are very interesting. They take the ideas of Redux, add Observables, and tie it all to Angular 2. The idea of Redux, having the state defined and stored, changed by dispatching an event, with reducers that are pure functions changing the state, is brilliant in its simplicity. It is similar to an event handler in GUI application frameworks, but different.
  5. Its power is in what it forces you as a developer to do. Start by defining any and all of the states your application will enter. Each component will have different states, describing all the stages of fetching, waiting for, showing, and editing the data. Each state gets set in a reducer function. This forces a level of simplicity; even the pure functions and the secondary effects get separated, again forcing you as the developer to break the process down into discrete steps that can be represented by a data structure containing the state. It seems repetitive; my mind screams at the almost identical functions, but keeping the simple structure makes it easier to find mistakes and bugs. And ngrx/store, along with other Redux implementations, allows you to capture a step and return to it to sort out bugs.
  6. Observables are strange, confusing, wonderful and powerful. We tend to think of our applications as a point in time being represented on screen: something happens, we react to it, getting to the state at another point in time. Observables describe a stream of data. The stream might have only one piece, or it may have many. My contact information for a customer streams forth from the data source into an observable, which is displayed. If there is a change from any source, the change flows the same way, showing up on the display. That flow can be altered in many ways with the Observable methods. The flows go the other way as well; editing, keystrokes, selections: all the user input flows, is altered or used in some way, leading to a change in some aspect of the state.
  7. All this, stacked up with discrete state reducers and observables exposing the state, makes for traceable flows of events and data that can be debugged and fixed. One state of each component is when there is no data; handled gracefully, that removes the source of lots of bugs. Each error state can be defined as well, allowing for graceful failure.
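The pattern in points 4 to 7, a single state changed only by dispatching actions through a pure reducer, with subscribers observing each new state, boils down to a few lines. This is a generic sketch of the Redux idea, not the actual ngrx/store API:

```typescript
// Minimal sketch of the Redux idea behind ngrx/store: state changes
// only via actions run through a pure reducer, and subscribers see
// every resulting state.

interface Action { type: string; payload?: unknown; }
type Reducer<S> = (state: S, action: Action) => S;

class Store<S> {
  private listeners: Array<(s: S) => void> = [];
  constructor(private reducer: Reducer<S>, private state: S) {}

  getState(): S { return this.state; }

  // The only way to change state: dispatch an action through the reducer.
  dispatch(action: Action): void {
    this.state = this.reducer(this.state, action);
    this.listeners.forEach(l => l(this.state));
  }

  // Observable-style subscription: listeners see each new state.
  subscribe(listener: (s: S) => void): void {
    this.listeners.push(listener);
  }
}

// Example: the fetching/showing stages become explicit, enumerable states.
interface ContactState { status: "empty" | "loading" | "loaded"; name: string; }

const contactReducer: Reducer<ContactState> = (state, action) => {
  switch (action.type) {
    case "FETCH": return { ...state, status: "loading" };
    case "LOADED": return { status: "loaded", name: String(action.payload) };
    default: return state; // pure: unknown actions leave state untouched
  }
};

const store = new Store(contactReducer, { status: "empty", name: "" });
```

Because the reducer is pure, replaying the same action sequence always reproduces the same state, which is what makes capturing a step and returning to it possible.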

I've set some parameters for the application. Everything is autosaved. It is a design parameter that in any and all situations the user can leave and come back without losing data, easily getting back to where they were. The well-defined states are really helpful here. There are some instances where a distinct confirmation is required: an invoice posted, something being sent. That is a different action from saving the state.
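The autosave parameter can be met by persisting every state change the moment it happens, so leaving at any point loses nothing. A minimal sketch, with a Map standing in for whatever persistence layer is actually used:

```typescript
// Sketch: autosave by persisting every state change immediately, so the
// user can leave at any time without losing data. `storage` is a
// stand-in for the real persistence layer.

const storage = new Map<string, string>();

interface DraftState { invoiceText: string; savedAt: number; }

function autosave(state: DraftState): void {
  storage.set("draft", JSON.stringify(state));
}

function restore(): DraftState | null {
  const raw = storage.get("draft");
  return raw ? (JSON.parse(raw) as DraftState) : null;
}

// Every edit produces a new state which is saved at once; reopening
// the app restores exactly where the user left off.
let draft: DraftState = { invoiceText: "", savedAt: 0 };
function edit(text: string): void {
  draft = { invoiceText: text, savedAt: draft.savedAt + 1 };
  autosave(draft);
}
```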

A second one is that data entry is a pain in the neck. Some of it needs doing, but it should only be for situations where the experience and thought of the user are required. Otherwise the idea is to present the user with decisions that need to be made, data that needs to be entered, and a place or slate where the experience and knowledge of the user get applied.

A third one is, whenever possible, to have the data structured and entered in a way that doesn't require distinct fields. Sometimes they are necessary, but that should be determined by the data and its use rather than by the application structure. Something entered once with care and verified, then reused by selection or other methods, reduces the scope for error. A characteristic of our business is that much of the time careful data entry is impossible or impractical, so the idea is to choose times or contexts where careful data entry is done; the rest of the time it isn't necessary.

I'm enjoying Material Design, which is in alpha development for Angular 2. It makes it trivial to construct nice-looking and smooth-operating applications, and it enforces a certain simplicity and elegance that makes an application useful.

Older blog entries

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.