October 01, 2016

There are plenty of cloud services out there (Dropbox, Mega, etc.), and it never hurts to have access to one more. That's why I'm pleased to share with you that KIO GDrive 1.0 has been released, a service for Plasma applications that gives us everything we need to work with our documents online.

KIO GDrive 1.0, your Google hard drive in KDE

In the absence of an official GDrive client for Plasma, we are presented with a great alternative called KIO GDrive, which lets us use our cloud drive transparently in our KDE applications through the KIO protocol.

This way we get a view of our online documents, which we can edit at any time. Of course, this new virtual drive takes up no space on our local disk, but we obviously depend on a permanent network connection to access the files.

For example, we can edit a text file in Kate or crop an image in Gwenview, and as soon as we save them they are uploaded to the cloud, becoming available on any other connected device.

KIO GDrive also works with applications that do not use KIO, but in this case a dialog will appear asking us explicitly whether we want to upload the new version.

 

Setup is really simple. Once KIO GDrive is installed we just have to run the wizard, which will ask for our Google account credentials.

Once authentication is done, we can access it through the gdrive:// URL, which shows the account we added and lets us add more. It can also appear as just another folder on our system, showing up in the Places panel.

KIO GDrive 1.0, your Google hard drive in KDE

To install it, I recommend using the following links:

A great service that is going to become indispensable for those of us who have to use services from the big G.

More information: aelog

September 30, 2016

I’m very excited to let everyone know that as of tomorrow I’ll officially be joining Blue Systems, working full time on KDE and related projects! The chance to really immerse oneself completely into something you love and also work alongside people you absolutely respect is mind-blowing.

I would like to deeply thank Blue Systems for this opportunity. I hope my contributions will match the awesome generosity that everyone in the community has shown in allowing me to do this. Thank you!


September is almost over, a tough month in my working life if there ever was one, but one that this year I have fortunately managed to handle more than decently. Still, every now and then some news has to slip in among the scheduled posts so that the blog stays current. That's why I'm pleased to share with you that the second beta of Kubuntu 16.10 has been released, an eagerly awaited version of this veteran distribution that surely wants to make up for the criticism the previous one received.

What is a beta version?

For those just landing in the world of software development, a beta is simply a test version of a distribution (or of an application). The developers' idea is to make it public so that as many testers as possible work with it, find bugs and send them to the developers, who will then fix them before the final release.

For that reason it is not recommended for end users, who usually prefer their applications and distributions to be as stable as possible.

Second beta of Kubuntu 16.10 released

With the calendar set for October 13, the Kubuntu folks are hard at work polishing this second beta, which appeared on September 28 and needs the collaboration of as many beta testers as possible.

You can download the second beta of Kubuntu 16.10 from this link and report bugs here for the 32-bit version and here for the 64-bit version.

I'm sure this new version will come loaded with new features and be much more polished than the previous one. The distribution's new development team will surely have learned from past mistakes.

And, to wrap up, a short video from ubuntu made simple where we can see it in action.

More information: Kubuntu

In free software there’s a disappointing number of licences which are compatible in some cases and not in others.  We have a licence policy in KDE which exists to keep our licences consistent, to ensure maximum re-usability of our code while still ensuring it remains free software and that companies can’t claim additional restrictions which do not exist on code we have generously licenced to them.

Our hero (and occasional chauvinist god character) Richard Stallman invented copyleft and the GNU GPL to ensure that people receiving Free code could not claim additional restrictions which do not exist; if they do, they lose the right to copy the code under that licence.

An older class of licence is the permissive licences; these include the BSD, MIT and X11 licences, each of which has multiple variants, all of which say essentially “do whatever you like but keep this copyright licence included”.  They aren’t maintained, so variants get created and interpretations of how they apply in practice vary, without an authority to create consensus.  But they’re short and easy to apply, and many, many projects are happy to do so.  However, there are some curious misconceptions around them.  One is that they allow you to claim additional restrictions on the code and require anyone you pass it on to to get a different licence from you.  This is nonsense, but it’s a myth perpetuated by companies who want to abuse other people’s generosity in licences and even by groups such as the FSF or SFLC who want to encourage everyone to use the GNU GPL.

Here are the important parts of the MIT licence (modern variant):

Permission is hereby granted...
to deal in the Software without restriction...
subject to the following conditions:
The above copyright notice and this permission notice shall be included...

It’s very clear that this does not give you licence to remove the licence: anyone you pass this software on to, as source or binary or other derived form, still needs to receive the same licence.  You don’t need to pass on the source code if it’s a binary, in which case it’s not free software, but you still need to pass on this licence.  It’s unclear whether the licence covers patents as well as copyright, but chances are it does.  You can add your own works to it and distribute the result under a more restricted licence if you like, but again you still need to pass on this licence for the code as you received it.  You can even sublicence it, making an additional licence with more restrictions, but that doesn’t mean you can remove the Free licence; it explicitly says you can not.  Unlike the GPL there’s no penalty for breaking the licence: you can keep using the code, and in theory the copyright holder could sue you, but in practice the claim of extra restrictions is just a lie, nobody will call you out, and many people will even believe the lie.

Techy lawyer Kyle E. Mitchell has written an interesting line-by-line examination of the MIT licence which is well worth reading.  It’s a shame there’s no authority to stand up for these licences, and most people who use them do so because they don’t much care about people making claims over their code.  But it’s important that we realise they don’t allow any such claims, and the code remains Free software no matter whose servers it happens to have touched on its way to you.


I’m currently proposing some updates to the KDE licencing policy.  I’d like to drop use of the unmaintained FDL in docs and wikis in favour of Creative Commons Attribution-ShareAlike 4.0, which is created for international use, well maintained, and would allow sharing text into our code (it’s compatible with GPL 3) and from Wikipedia and other wikis (which are CC BY-SA 3.0).  Plus some other changes like allowing AGPL for web services.

Discussion on kde-community mailing list.

Diff to current.

 


I’m happy to finally announce the first stable release of KIO GDrive. KIO GDrive enables KIO-aware applications (such as Dolphin, Kate or Gwenview) to access and edit Google Drive files on the cloud.

Given the lack of an official Google Drive client for Linux, KIO GDrive can be used as a replacement for managing your Drive files with Dolphin. Even better, you don’t have to use space on your disk! The files are still in the cloud, yet you can edit them as if they were locally stored on your machine.

For example you can edit a text file in Kate or crop an image in Gwenview, and just save those files as you normally would. The edited file will be automatically uploaded to the cloud. This also works with non-KIO applications, for example LibreOffice, but in this case a dialog will explicitly ask whether you want to upload the new version of the file.

Dolphin integration is provided by a Desktop file. Just open your menu and look for “Google Drive”:

Google Drive Dolphin Integration

The first time you will be asked to provide your Google account (this will hopefully change if we manage to implement single-sign-on via KAccounts):

Google Drive Login

Once you are authenticated, the Desktop entry will just open Dolphin with the gdrive:// URL already set, which shows your accounts and allows you to add new ones.

Google Drive Accounts

When you click an account, you can browse the account files and manage them, as the following screenshot shows.

Google Drive Dolphin
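If you prefer a terminal, the same location can also be opened directly — a trivial example, since Dolphin accepts any KIO URL as an argument:

dolphin gdrive://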

Bugs

This is the first stable release, but bugs might still be out there. If you find a bug, please report it!

Weekly mobile IMG update, which brings in the following:

  • Updated Kirigami
  • Fixes for contact export from a Google account. To add a Google account:
    • First set the date:
      /system/bin/date -s YYYYMMDD.HHMMSS
    • Open Settings -> Accounts -> Add account -> Google
    • Log in to your Google account and authenticate.
    • Once authentication is done, it should import your contacts automatically.
    • In the Dialer app -> Contacts, you can view your contacts.


  • Fixes for SSL errors in Discover

You can flash using instructions at https://plasma-mobile.org/nexus-5/

September 29, 2016

Let's admit it: KDE doesn't have a convincing vector graphics editor. Karbon is not up to the task, and Krita, which is a magnificent graphics application, is not designed for editing this kind of creation. So we have no choice but to use an alternative that has proven to be more than ready for semi-professional work. That's why I'm pleased to share with you an Inkscape course in 10 videos that can help us start mastering this fantastic application.

An Inkscape course in 10 videos

From Video Tutoriales 2.0 comes a series called Ilustración Fácil con Inkscape, a video tutorial that aims to explain, over 10 chapters, how this amazing application works: a vector graphics editor ideal for creating drawings, signs, posters, flyers or any other similar creation.

An Inkscape course in 10 videos

Not all of the course has been uploaded yet, but the topics covered are quite interesting. The list of chapters published to date is as follows:

  1. Introduction. Where to get this free and open program. (Basic level)
  2. First steps: the workspace, handling the main tools, text and colour at a basic level.
  3. Bézier curves
  4. Gradients: giving our work a professional touch.
  5. Transparencies:
    1. Transparencies, part two
  6. Advanced colour and stroke
  7. Welding, cutting and manipulating: how to create shapes by welding and cutting elements, and how to manipulate regular shapes.

And to give you a taste, here's a button… I mean, a video.

As usual, I invite you to watch the videos, share them, use them to learn and, above all, if you liked them don't forget to give them a Like and, if you feel up to it, leave a comment. I'm sure the Video Tutoriales 2.0 channel will appreciate it, and it will be an important incentive for them to finish the video series as soon as possible.

It will be your good deed of the day for the promotion of Free Software.

It is that time of the year when we can all take a moment to think about who has been the most helpful, incredible, kind, skilled and friendly person in the Qt community.

Qt Champions is a way to recognize the individuals who strive to help the community most. They can be found in different places and doing different things, but they are all united in their drive to make Qt a friendly and helpful community.

Past Qt Champions include, among others: Samuel Gaist, who has always been helpful and friendly on the forum and has more posts than anyone else; Iori Ayane, who has written and translated several books into Japanese and is a key person in the Japanese Qt scene; Robin Burchell, who is a long-time Qt contributor doing excellent work in the code base; and Johan Thelin and Jürgen Bocklage-Ryannel, the authors of the QML book.

In addition to the recognition that the Qt Champion title brings, the champions get a full Qt license for a year, an invitation to next year's main Qt event and naturally an awesome Qt Champion t-shirt!

So if you have ideas on who should be this year's Qt Champion, head over to the Wiki to make your Qt Champion nominations!

The post Qt Champion nominations for 2016 now open appeared first on Qt Blog.




FOSS at Amrita presents, for the first time in India, the MediaWikiToLearn Hack-a-thon and Edit-a-thon, in collaboration with WikiToLearn and the Wikimedia Foundation, at Amrita University, Amritapuri Campus.


Background

FOSSatAmrita is a student community at Amrita University, Amritapuri Campus. Being one of the most active student communities on the campus, we encourage, support and mentor students to improve their technical and social skills through contributing to open source. We aim to encourage students to embrace the free and open source approach that is fast gaining momentum. We are a community of students and mentors that support and guide each other towards making open source contributions and towards the usage of free software. This year, FOSSatAmrita had 10 students selected for Google Summer of Code and 1 for Outreachy, which shows the potential of the students in the FOSSatAmrita club.



To bring in more open source contributors, we are hosting this two-day hackathon for the very first time in the history of Amrita University, Amritapuri Campus. The hackathon will help students understand how the MediaWiki and WikiToLearn communities work. The two days will cover every aspect — from installing the software and setting up the environment to fixing some easy bugs and submitting patches for review. (Don't worry if some of the words don't make any sense now, we're here to answer all your queries!)

The program will run from October 1st to 2nd with two parallel tracks:

MediaWiki Hackathon: run by invited developers and existing MediaWiki contributors in the club. We expect around 50 participants, with all requests handled through the Google Form mentioned below. Participants will be given guidance and the opportunity to contribute code to the Wikimedia codebase, send in patch sets, and understand the code review process. This will give them a head start on the upcoming Google Summer of Code 2017 and other internship opportunities.

WikiToLearn Edit-a-thon: teachers and students get together to collaborate and develop open course content on en.wikitolearn.org. Teachers participate by either editing or reviewing content in the wiki. This will give teachers and students a structured way to communicate online about academic materials.
Added Benefit

Are you willing to apply for Google Summer of Code (GSoC) but don’t know how to start?

Well, that’s a problem everyone faces when first joining open source communities.

Attending this workshop would give a clearer picture to anyone who’s willing to apply to the open source scholarship programs like Google Summer of Code (GSoC) and/or Outreachy but lacks direction or information. This workshop will leave you with a better understanding of how you may proceed with contributing to an organization of your choice, thus significantly boosting the probability of you getting selected! \o/\o/

Details

  • Date: Saturday and Sunday, October 1 and 2, 2016
  • Time: 10 AM to 8 PM
  • Venue: Ground Floor Lab, Amrita University, Amritapuri Campus
  • Contact: foss@am.amrita.edu
  • Hashtag: #MediaWikiToLearnHack
  • Register: Register here.
  • IRC: #mediawiki on FreeNode
  • Report: Photos, bug fixes and links to patches
Facilitators
Abdeali JK
Tony Thomas
Abhinand N
Devi Krishnan
Srijan Agarwal
Registration

Fill this Google form to register yourself for the workshop. Since we have limited slots available, we’ll keep the event limited to the registered students.

Note on the registration prerequisites: please make sure you clone the development master branch of mediawiki-core from gerrit.wikimedia.org, preferably over SSH. Please follow the Gerrit Tutorial from top to bottom to complete both tasks, and in case of any questions, ask in #wikimedia-dev or #mediawiki or contact the FOSS club at foss@am.amrita.edu.
Hackathon Prerequisites

To make sure all attendees are at the same stage, please ensure that you have the following set up on your machine:
  • Any working Linux environment, with a LAMP server installed. You can find the installation steps for LAMP on Ubuntu here.
  • Any powerful PHP IDE, PhpStorm recommended. You can find the instructions here.
  • An account on Wikitech, Gerrit, Wikimedia Phabricator, and GitHub.
  • Try cloning and setting up Gerrit on your machine following the instructions given here.
  • Try connecting to the #wikimedia-dev IRC channel on freenode.

Important: we expect participants to have a clone of MediaWiki core downloaded and installed on their machine beforehand, so that we can start early with the contribution phase. You can find detailed instructions on how to set up a development environment for MediaWiki in the Gerrit/Tutorial; a rough sketch of the steps follows below.
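As a sketch only (the anonymous HTTPS remote and the SQLite options shown here are assumptions; the Gerrit tutorial covers the SSH setup and the full install options):

# Clone MediaWiki core (anonymous HTTPS remote; SSH is preferred for submitting patches)
git clone https://gerrit.wikimedia.org/r/mediawiki/core.git mediawiki
cd mediawiki

# Install the PHP dependencies
composer install --no-dev

# Run the command-line installer against a local SQLite database
php maintenance/install.php --dbtype=sqlite --dbpath=data --pass=adminpass "Dev Wiki" admin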

In case you do not have any prior know-how of PHP, version control or web applications: you are welcome to try setting up a simple registration and login web application in PHP beforehand on your machine. You can find sample code for it here. You will have to set up LAMP or a similar stack on your machine to test the code locally.

Why Should You Attend?

  • Reading through a large open-source code base and contributing code: improve your technical and scripting skills by solving simple to complex real-world problems in the MediaWiki software.
  • Motivated students reaching out and getting into internship opportunities like Google Summer of Code and Outreachy: along with building strong technical foundations, internship programs like GSoC and Outreachy let their interns work on major problems, proposing and implementing their solutions under expert mentorship. The selected students get a considerable stipend, and it's a major addition to their institutional record too.
  • Active community engagement and other opportunities: the workshop will add to the technical know-how of the participants while they work closely with a worldwide Wikimedia community, which is the fun part.
FAQs

What you need to know

You need to have some sort of prior experience developing stuff (web apps, mobile apps, data mungers, anything at all!). There is such a variety of things you can do (CSS/JS hacks, web apps that use the API, mobile apps, data mining) that pretty much anyone with some developer experience is bound to find something fun to work on. This is a hackathon, not a workshop.
Knowledge of Free Software licenses and comfort using online project hosting websites (like GitHub), since we want you to make the code you write for the hack freely available online.
If you feel you don't have any of the things listed here, consider taking a look at these courses:
Learn PHP online with codecademy
Learn the command line with codecademy
Learn Git online with try.github
Learn a bit of Python with learnpython
Go through this repo on Contributing to Open Source and see what you can make out of it.

What language can I use?

Any programming language you are comfortable with, as long as you can make sense of the MediaWiki code base.
Life will be easier if you have some prior programming experience in PHP, since most of the MediaWiki code is written in PHP.

How can I prepare for the Hackathon?
Read through links in Resources, check out the Examples.
Feel free to ask questions! Contact us.

What qualifies me to come?
Show us some code you have written, tell us what you know, tell us why you love hacking (and Wikipedia) in general, and you’ll get the pass :)
Please sign up here.

How to spread the word
Pass on the word to your fellow geek friends.
Use #MediaWikiToLearnHack on Twitter.
The brand new app scaffolding tool in our app store
Last night, Bernhard Posselt finished the app scaffold tool in the app store, making it easy to get up and running with app development. I was asked on twitter to blog about setting up a development environment, so... here goes.

What's simpler than downloading a zip file, extracting it and running a command in the resulting folder to get a Nextcloud server up on localhost for hacking?

Yes, it can be that simple, though it might require a few minor tweaks and you have to make sure to have all Nextcloud dependencies installed.

Note that this is useful if you want to develop a Nextcloud app. If you want to develop on the Nextcloud core, a git checkout is the way to go and you'll need some extra steps to get the dependencies in place; get started here. Feedback on this process is highly appreciated, especially if it comes with a pull request for our documentation of course ;-)

Steps One and Two: Dependencies

  • Install PHP and the modules mentioned here
    Your distro should make the installation easy. Try these:
    • openSUSE: zypper in php5 php5-ctype php5-curl php5-dom php5-fileinfo php5-gd php5-iconv php5-json php5-ldap php5-mbstring php5-openssl php5-pdo php5-pear php5-posix php5-sqlite php5-tokenizer php5-xmlreader php5-xmlwriter php5-zip php5-zlib
    • Debian: apt-get install php5 php5-json php5-gd php5-sqlite curl libcurl3 libcurl3-dev php5-curl php5-common php-xml-parser php5-ldap bzip2
  • Make Nextcloud session management work under your own user account.
    Either change the path of the PHP session files or chmod 777 the folder they are in, usually something like /var/lib/php (Debian/SUSE) or /var/lib/php/session (Red Hat).

The Final Four Steps
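As a rough sketch — assuming a current server archive (the version number and URL below are illustrative) and PHP's built-in development web server — the download, extract and run part boils down to something like this:

# Download and unpack a Nextcloud server release
wget https://download.nextcloud.com/server/releases/nextcloud-10.0.0.zip
unzip nextcloud-10.0.0.zip
cd nextcloud

# Serve it on localhost with PHP's built-in web server, then open http://localhost:8080
php -S localhost:8080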


Nextcloud should present you with its installation steps! Give your username and password and you're up and running with SQLite.

Start with the app

Now you create a subfolder in nextcloud/apps with the name of your app and put in a skeleton. You can generate an app skeleton really easily: use the scaffolding tool, part of our new app store for Nextcloud 11!

It's probably wise to now get going with the app development tutorial here. This isn't updated for the scaffolding tool yet, so you'll have a head start here. Be sure to check out the changelog; we try to make sure the latest changes are noted there, so even if we didn't manage to fully update the tutorial, you can find out what will and won't work in the changelog. Also, be sure to update the links to get the latest dev doc - this all links to 11; once that is out it is probably better to directly target 12 and so on.

Help and feedback

Your input is very much welcome! If you run through these steps and get stuck somewhere, let me know and I'll update the documentation. Or, of course better still, do a pull request on the documentation right in github. You don't even have to do a full checkout, smaller fixes can easily be done in the web interface on github.

Last but not least, ask questions on our forums in the app dev channel or on IRC. Here is the Nextcloud development IRC chat channel on freenode.net, also accessible via webchat.

Thanks, good luck, and have fun building Nextcloud apps!

September 28, 2016

Kubuntu 16.10 beta has been published. It is possible that it will be re-spun, but we have our beta images ready for testing now.

Please go to http://iso.qa.ubuntu.com/qatracker/milestones/367/builds, login, click on the CD icon and download the image. I prefer zsync, which I download via the commandline:

~$ cd /media/valorie/ISOs (or wherever you store your images)
~$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/20160921/yakkety-desktop-i386.iso.zsync

UPDATE: the beta images have now been published officially. Rather than the daily image above, please download or torrent the beta, or just upgrade. We still need bug reports and your test results on the qatracker, above.

Thanks for your work testing so far!

The other methods of downloading work as well, including wget or just downloading in your browser.

I tested usb-creator-kde, which has sometimes not worked in the past, but this time it worked like a champ once the images were downloaded. Simply choose the proper ISO and device to write to, and create the live image.

It took me a while to figure out how to get my little Dell travel laptop to let me boot from USB (hit the Delete key as it is booting, quickly hit F12, choose legacy boot, and then finally I could actually choose to boot from USB). Secure Boot and UEFI make this more difficult these days.

I found no problems in the live session, including logging into wireless, so I went ahead and started firefox, logged into http://iso.qa.ubuntu.com/qatracker, chose my test, and reported my results. We need more folks to install on various equipment, including VMs.

When you run into bugs, try to report them via "apport", which means using ubuntu-bug packagename on the command line. Once apport has logged into Launchpad and gathered the relevant error messages, you can give some details like a short description of the bug, and get the bug number. Please report the bug numbers on the qa site in your test report.

Thanks so much for helping us make Kubuntu friendly and high-quality.


October 13 is coming up fast and we need testers for this second Beta. Betas are for regular users who want to help us test by finding issues, reporting them or helping fix them. Installing on hardware or in a VM, it’s a great way to help your favorite community-driven Ubuntu based distribution.

Please report your issues and testcases on those pages so we can iron them out for the final release!
For 32 Bit users
For 64 Bit users

Beta 2 download

Binding loops suck, and they can be hard to fix. I wrote a tool that prints a backtrace of the bindings being updated when a loop occurs. See link at bottom.

About:

QML bindings are a very fast and easy way to write a declarative UI. However it's quite easy to accidentally write an infinite loop.
This can happen if we bind propertyA to affect propertyB and also bind propertyB to affect propertyA; the two would then constantly update each other.

Consider the following example:

1 import QtQuick 2.0
2 
3 Rectangle {
4     width: childrenRect.width
5     Text {
6        text: parent.width > 10 ? "Hello World" : "Hi"
7     }
8 }

The Rectangle width changes on startup; that changes the text's size, which in turn changes the Rectangle's width. If this went undetected, the application would loop forever and eventually crash.
QML prints a warning, and ceases processing, but it's an indication that something is wrong with the logic of your code, and it needs fixing.
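One way to untangle this particular example — just a sketch, not something from the tool itself — is to let the Rectangle follow the Text's implicit size instead of reading back a width that the Text in turn depends on:

import QtQuick 2.0

Rectangle {
    // Size the Rectangle from the Text's implicit size instead of
    // reading back a width that the Text itself depends on.
    width: label.implicitWidth
    height: label.implicitHeight
    Text {
        id: label
        text: "Hello World"
    }
}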

However, whilst the loop here is obvious to spot, it can be considerably more complicated when looping through tens of bindings over many many components.

Creating a Tool

The problem with this warning is that on its own it is rather unhelpful - trying to find the loop then becomes a manual task of tracing all possible combinations through every binding that could lead to a short circuit. GDB on its own doesn't help, as the C++ backtrace tells us absolutely nothing we can use.

I've created a small script that, using gdb, unwinds the backtrace detecting where the properties changed and then showing the QML code which is responsible.

Simply download here into $PATH and run with

"binding-loop-tracker.py myAppName"

In the case of the loop above we will see output like:

=====Binding loop detected - printing backtrace =====
#0 - file:///home/david/temp/binding_loop.qml:4:12
#1 - file:///home/david/temp/binding_loop.qml:6:15
#2 - file:///home/david/temp/binding_loop.qml:4:12

Which shows which line of QML was being updated when we hit the loop.

It still requires some manual work to follow the trace through, but it's a useful aid and has already helped me in two real world cases that I couldn't resolve manually.

Hello!


This is a small wrap-up from QtCon, the biggest Qt event in Europe in 2016, that happened at the beginning of September. At QtCon the Qt community joined forces with the KDE, FSFE and VideoLAN communities, to create an exciting event in the spirit of open collaboration and participation amongst projects.

During QtCon many KDAB engineers gave in-depth technical talks about Qt, QML, Qt3D, OpenGL and the other technologies around Qt development. All the sessions were of the highest quality, as you may expect from KDAB speakers, and extremely well received by the audience.

In case you missed some, here’s a complete list. You can find each talk’s description, slides, code / example material, and a recording of the session by following the links.

In no particular order:

See you at the Qt World Summit!

The post KDAB talks at QtCon 2016 appeared first on KDAB.

Starting with Qt 5.7, we added the ability to create Android services using Qt. In this article we’re going to see how to get started, and also how to communicate between your app and the service.

Before we get started I want to add a big bold WARNING about the performance! Because the services are run in the background for a very long time, make sure your service doesn’t drain the device battery!

Getting started

Step I: Extend QtService

Every single Qt Android Service must have its own Service java class which extends QtService, so the first step is to create such a service:

// java file goes in android/src/com/kdab/training/MyService.java
package com.kdab.training;
import org.qtproject.qt5.android.bindings.QtService;

public class MyService extends QtService
{
}

Step II: Add the service section(s) to your AndroidManifest.xml file

The next step is to add the service section(s) to your AndroidManifest.xml file. To do that you first need to copy & paste the template from https://wiki.qt.io/AndroidServices into your AndroidManifest.xml file, then set the android:name attribute to your service class name, as shown in the following snippet:

<application ... >
  <!-- .... -->
  <service android:process=":qt" android:name=".MyService">
  <!-- android:process=":qt" is needed to force the service to run on a separate
                                                        process than the Activity -->

    <!-- .... -->

    <!-- Background running -->
    <meta-data android:name="android.app.background_running" android:value="true"/>
    <!-- Background running -->
  </service>
  <!-- .... -->
</application>

BE AWARE: Every single Qt service/activity MUST run in its own process! Therefore for each service you must set a different android:process attribute value.
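For example (an illustrative snippet only; the service and process names are made up), two services would each get their own android:process value:

<!-- Illustrative only: each service forced into its own separate process -->
<service android:process=":qt_service1" android:name=".MyService" ... />
<service android:process=":qt_service2" android:name=".MySecondService" ... />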

Step III: How to start the service?

Now you need to decide how to start the service. There are two ways to do it:

  • on demand
  • at boot time

We’re going to check them both:

Start the service on demand

This is the most common way to start your service(s). To start the service you just need to call the Context.startService(Intent intent) method. The easiest way is to add a static method to your MyService:

// java file goes in android/src/com/kdab/training/MyService.java
package com.kdab.training;

import android.content.Context;
import android.content.Intent;
import org.qtproject.qt5.android.bindings.QtService;

public class MyService extends QtService
{
    public static void startMyService(Context ctx) {
        ctx.startService(new Intent(ctx, MyService.class));
    }
}

Then simply call it from Qt to start it:

QAndroidJniObject::callStaticMethod<void>("com/kdab/training/MyService",
                                              "startMyService",
                                              "(Landroid/content/Context;)V",
                                              QtAndroid::androidActivity().object());

Start the service at boot time

This method is used quite seldom and is useful ONLY when you really need to run the service at boot time; otherwise I recommend starting it on demand.

First you need to add android.permission.RECEIVE_BOOT_COMPLETED permission to your AndroidManifest.xml file:

<manifest ... >

  <!-- .... -->
  <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
</manifest>

Then you need to add a receiver element to your AndroidManifest.xml file:

<application ... >
    <!-- .... -->
    <receiver android:name=".MyBroadcastReceiver">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED" />
        </intent-filter>
    </receiver>
    <!-- .... -->
</application>

And finally, you need to implement MyBroadcastReceiver class, as shown in the following snippet:

// java file goes in android/src/com/kdab/training/MyBroadcastReceiver.java
package com.kdab.training;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class MyBroadcastReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        Intent startServiceIntent = new Intent(context, MyService.class);
        context.startService(startServiceIntent);
    }
}

Step IV: Where to put your Qt Service code?

Next you need to decide where you’re going to put your service code. Qt (and qmake) has two options for you:

  • in the same .so file with the application
  • in a separate .so file

We’re going to check them both:

Same .so for app & service(s)

Because you’ll have one big .so file, you need a way to know whether it is running as an activity or as a service. To do that you just need to pass some arguments to your main function. AndroidManifest.xml allows you to do that easily:

<service ... >
    <!-- ... -->
    <!-- Application arguments -->
    <meta-data android:name="android.app.arguments" android:value="-service"/>
    <!-- Application arguments -->
    <!-- ... -->
</service>
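Inside the shared binary, main() can then branch on that argument. This is only a minimal sketch (the "-service" argument matches the manifest snippet above; everything else is illustrative):

#include <QGuiApplication>
#include <QCoreApplication>
#include <cstring>

int main(int argc, char *argv[])
{
    // The manifest passes "-service" via android.app.arguments when this
    // binary is started as a service rather than as the activity.
    bool runAsService = false;
    for (int i = 1; i < argc; ++i) {
        if (std::strcmp(argv[i], "-service") == 0)
            runAsService = true;
    }

    if (runAsService) {
        QCoreApplication app(argc, argv); // no UI needed on the service side
        // ... create the service-side objects here ...
        return app.exec();
    }

    QGuiApplication app(argc, argv);      // regular activity start-up
    // ... load the QML UI here ...
    return app.exec();
}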

Then make sure you set the same android.app.lib_name metadata for both service(s) & activity elements:

<service ... >
    <!-- ... -->
    <meta-data android:name="android.app.lib_name"
                android:value="-- %%INSERT_APP_LIB_NAME%% --"/>
    <!-- ... -->
</service>

I recommend using this method only if your activity and your service(s) share a large piece of code.

Separate .so files for app & service(s)

The second option is to create separate .so files for your app & service(s). First you need to create a separate server .pro file(s):

TEMPLATE = lib
TARGET = server
CONFIG += dll
QT += core
SOURCES += \
    server.cpp

The entry point of the server .so is the main function:

#include <QDebug>

int main(int argc, char *argv[])
{
    qDebug() << "Hello from service";
    return 0;
}

Last you need to load the server .so file:

<service ... >
    <!-- ... -->
    <meta-data android:name="android.app.lib_name" android:value="server"/>
    <!-- ... -->
</service>

Use QtRemoteObject for communication

We’ve seen how to create and how to start a Qt on Android service; now let’s see how to handle the communication between the app and the service. There are lots of solutions out there, but for any Qt project I do recommend QtRemoteObjects, because it will make your life so much easier!

QtRemoteObjects is a playground Qt module led by Ford, for object remoting between processes/devices:

  • exports QObjects remotely (properties, signals & slots)
  • exports QAbstractItemModels remotely
  • creates a replicant on the client side you can interface with
  • repc generates source & replica (server & client) source files from .rep files
    • .rep file is the QtRemoteObjects IDL (interface description language)

As you can see it’s very Qt specific!
Let’s see how to add it to your projects and use it.

Get QtRemoteObjects

QtRemoteObjects project is located at http://code.qt.io/cgit/playground/qtremoteobjects.git/, to get it you need to run the following commands:

$ git clone git://code.qt.io/playground/qtremoteobjects.git
$ cd qtremoteobjects
$ ~/Qt/5.7/android_armv7/bin/qmake -r && make && make install

If needed, replace ~/Qt/5.7/android_armv7 with your Qt version and android ABI of choice.

Use QtRemoteObjects

Using QtRemoteObjects is pretty easy; there are just a few steps:

– add QtRemoteObjects to your .pro files

# ...
QT += remoteobjects
# ...

– create .rep file(s)

class PingPong {
    SLOT(void ping(const QString &msg));
    SIGNAL(pong(const QString &msg));
}

– add .rep file(s) to the server .pro file

# ...
REPC_SOURCE += pingpong.rep
# ...

– add .rep file(s) to the client .pro file

# ...
REPC_REPLICA += pingpong.rep
# ...

– QtRemoteObjects source(server) side implementation

#include <QCoreApplication>
#include "rep_pingpong_source.h"

class PingPong : public PingPongSource {
public slots:
    // PingPongSource interface
    void ping(const QString &msg) override {
        emit pong(msg + " from server");
    }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QRemoteObjectHost srcNode(QUrl(QStringLiteral("local:replica")));
    PingPong pingPongServer;
    srcNode.enableRemoting(&pingPongServer);

    return app.exec();
}

Let’s check the code a little bit.
First you need to implement all the .rep interfaces (PingPongSource), then export the PingPong object using enableRemoting.

– QtRemoteObjects replica(client) side implementation

#include "rep_pingpong_replica.h"

// ....
    QRemoteObjectNode repNode;
    repNode.connectToNode(QUrl(QStringLiteral("local:replica")));
    QSharedPointer<PingPongReplica> rep(repNode.acquire<PingPongReplica>());
    bool res = rep->waitForSource();
    Q_ASSERT(res);
    QObject::connect(rep.data(), &PingPongReplica::pong, [](const QString &msg){
        qDebug() << msg;
    });
    rep->ping("Hello");
// ....

Let’s check the code:

  • use QRemoteObjectNode to connect to QRemoteObjectHost
  • use QRemoteObjectNode:acquire to link the local object to the remote one
  • use the acquired object as if it were local (call slots, connect to signals, etc.)

As you can see, using Qt + QtRemoteObjects is (much?) easier and more straightforward than Android’s Java services + AIDL 😉

Limitations

  • the activities & service(s) must each run in their own separate process.
  • it is not possible (yet) to use QtCreator to easily add a service section to your AndroidManifest.xml file check QTCREATORBUG-16884
  • it is not possible (yet) to use QtCreator to easily generate a service subproject for us, check QTCREATORBUG-16885
  • it is not possible (yet) to see the services logs in QtCreator. You’ll need to use
     $ adb logcat 

    to see it, check QTCREATORBUG-16887

  • it is not possible (yet (hopefully)) to debug the services in QtCreator. This feature will take some time to implement, therefore I’ll not hold my breath for it; check QTCREATORBUG-16886

Please use the above bug report links to vote for your favorite tasks; the ones that have more votes are (usually) implemented first!

You can find the full source code of this article here: https://github.com/KDAB/android

The post Qt on Android: How to create an Android service using Qt appeared first on KDAB.

We’ve got new builds for you! A new 3.0.1.1 stable release with a number of important bugfixes: the brush-resize lag is gone, templates are there again, loading brush tags works again, the sobel filter is fixed.

And we’ve got a 3.0.1.90 unstable beta1 release, which includes all the work done by Wolthera (soft proofing, new color dialog, color-managed color picker), Julian (a rewrite of Qt’s OpenGL 2D painting subsystem and modernization of Krita’s OpenGL canvas) and Jouni (interpolation and keyframing for layers and masks and their properties) for their Google Summer of Code projects, rendering to video, a whole new and very speedy brush engine, the first (slow) version of the lazy-brush coloring mask implementation and much, much more.

Because so much new code got into this beta, we’re probably going to have to spend more time than scheduled ironing out all bugs, so please do download and test these builds!

For OSX users, there is only a 3.0.1.90 build: because 3.0.2 will be the first release that has the full range of OpenGL-related features, thanks to Julian’s Google Summer of Code work, it no longer makes sense to build versions of Krita that do not have this code.

We have a nasty leak in QFormLayout::setWidget(). In setWidget(), we create the QWidgetItemV2 corresponding to the passed QWidget, and pass that on to Private::setItem, which has a bunch of error returns (guard clauses) that do not delete the item, among them negative row index and cell already occupied.

We could easily fix that missing delete, but this function is also used by setLayout(), say, where the item is the layout.

Conceptually deleting that item (= the nested layout) is completely OK, because the user should be able to rely on the form layout to take ownership of the nested layout, without ifs and buts.

But then we have code in tst_qformlayout that breaks. Simplified, it reads:

   QFormLayout layout;
   QHBoxLayout l4;
   layout.addLayout(-1, QFormLayout::FieldRole, &l4);

I guess you spot the problem? If l4 had been added, everything would’ve been peachy: the QHBoxLayout destructor unregisters itself from layout, which then does not attempt to delete l4 when it itself is deleted.

But if l4 is not added for some reason, like in the test code above, the fixed code will attempt to delete l4, which is undefined behaviour, of course (double-delete).

I think such broken code deserves to be broken, for the greater good of fixing a resource leak. Esp. since a double-delete should complain much louder than a leak, and the API user can do something about the double-delete while she can’t do anything about the leak (the pointer is not reachable from outside QFormLayout).

I feel uneasy about adding this to 5.6 LTS, though, so I’ll make a minimal fix there, just for setWidget().

What do you think about the larger fix?


Filed under: English, Qt

September 27, 2016


While not being advertised on the KDE neon main page just yet (and it won't be for a while), we've recently begun doing regular builds of a special Korean Edition of neon's Developer Edition tracking the stable branch of KDE's code repositories. The Korean Edition pre-selects the Korean language and locale at boot, packs all the Korean translations we have and comes with a Korean input method pre-setup.

Joseon-era Hangeul metal type

Why a Korean Edition?

Among many other locations around the planet, the local community in Korea is planning to put on a KDE 20th Anniversary birthday party in Seoul on October 14th. The KDE neon Korean Developer Edition was directly created on request for this event, to be made available to attendees.

That said - this is actually something we've been wanting to do for a while, and it's not just about Korean.

None of the bits that make up the new image are new per-se; KDE has supported Korean for a long time, both with foundational localization engineering and regular maintenance activity. And as of the Plasma 5.6 release, our Input Method Panel is finally bundled with the core desktop code and gets automatically added to the panel on first logon in a locale that typically requires an input method.

Yet it's pretty hard to keep all of this working well, as it requires tight integration and testing across an entire stack, with some parts of the whole living upstream or downstream of KDE.org. For example: After we attempted to make the Plasma panel smarter by making it auto-add the Input Method Panel depending on locale, we couldn't actually be sure it was working as desired by our users, as it takes time for distros to get around to tuning their dependency profiles and for feedback from their users to loop back up to us. It's a very long cycle, with too many opportunities to lose focus or domain knowledge to turnover along the way.

This is where KDE neon comes in: As a fully-integrated product, we can now prove out and demo the intended distro experience there. We can make sure thing stay in working order, even before additional work hits our other distro partners.

Right now, we're kicking things off with Korean Edition, but based on time, interest and testers (please get in touch!), we'd like to build it out into a full CJK Edition, with translations and input support for our Chinese and Japanese users pre-installed as well (as another precursor to this, the decision to switch to Noto Sans CJK as Plasma's default typeface last year was very much made with the global audience in mind as well).

Ok, but where do I get it?

Here! Do keep in mind it's alpha. ☺

Today the first feature update of Kirigami has been released.
We have a lot of bug fixes and some cool new features:

The Menu class features some changes and fixes which give greater control over the action triggered by submenus and leaf nodes in the menu tree. Submenus now know which entry is their parent, and allow the submenu’s view to be reset when the application needs it to.
The OverlaySheet now also allows embedding ListView and GridView instances in it (see the sketch below).
The Drawer width is now standardized so all applications look consistent with one another, and the title now elides if it doesn’t fit. We also introduced the GlobalDrawer.bannerClicked signal to let applications react to banner interaction.
SwipeListItem has been polished to make sure its contents can fit the space they have, and we introduced the Separator component.
Desktop Kirigami applications now support the “quit” shortcut (such as Ctrl+Q).
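As a rough sketch of the OverlaySheet change (assuming the Kirigami 1.x QML import name; the model and delegate are purely illustrative):

import QtQuick 2.5
import org.kde.kirigami 1.0 as Kirigami

Kirigami.OverlaySheet {
    id: sheet
    // The sheet takes care of sizing and scrolling for a view placed directly inside it.
    ListView {
        model: 30
        delegate: Text {
            text: "Row " + (index + 1)
        }
    }
}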

Plasma 5.8 will depend on Kirigami 1.1, so if you are planning to write a Kirigami-based application, it will work by default and integrate nicely into the Plasma 5.8 desktop.

Plasma 5.8 also has a big new user of Kirigami 1.1: Discover, the application to search for and install new software, has a brand new user interface based upon Kirigami.


This is probably the last feature release based upon QtQuickControls 1; a QtQuickControls 2 version is on the way at an experimental stage. The port will have way simpler code (and a smaller memory footprint), but that is an entry for another day 🙂

At froglogic, we’re big fans of open source software. A large part of our engineering (and management!) staff contributed or contributes to open source projects, and everyone visiting our offices for a job interview certainly gets a big +1 in case she can show off some open source work! We also use a lot of open source software for our daily work, ranging from obvious projects like Git or the Linux kernel to individual libraries serving very specific purposes; the Acknowledgements Chapter of the Squish manual gives an impression of how tall the giants are upon whose shoulders we’re standing.

Over the last couple of years we contributed back various bug fixes and improvements to different projects we’re using, but we’d like to step things up a little bit. Hence, we have now open-sourced an internally developed C++ framework called ‘TraceTool’ and made it available under the LGPL v3 license on our GitHub account:

http://github.com/froglogic/tracetool

TraceTool started out as a project which we did a few years ago, but it has since grown into a full-fledged, efficient and configurable framework which we use internally to aid in debugging our software.

TraceTool GUI highlighting trace statements generated by an interesting function.



The bird's-eye view is that the source code of the application or library is augmented with additional statements which generate output reflecting the current state of the application. By default, these statements don’t do anything though – the idea is that they are compiled into the application (and shipped in release builds) but only get activated when a special configuration file is detected at runtime. That configuration file describes

  • for which C++ classes, functions or files to enable tracing
  • what format to use for the trace output
  • whether to log stack traces when individual trace points are hit
  • and much more

In addition to the library, the framework provides a GUI as well as various command line tools to view and process the generated traces.

The workflow which inspired this project is that once a user reports a defect of some sort, he gets sent a configuration file to be stored somewhere, and that configuration file enables just the right amount of debug output for the developer to understand what’s going on.

We welcome everyone to hop over to the TraceTool repository page, have a look, give it a try – and see whether this might be useful to you. If you feel like it, don’t hesitate to contribute back by submitting a Pull Request so that we can improve things for everyone. :-)

Online API documentation is a fine thing: by its URLs it provides a shared universal location to refer to, and it is often kept up to date with the latest improvements. It also serves as an index trap for search engine robots, so more people have a chance to find out about the product.

But the world is sometimes not an online heaven:

  • “my internet is gone!”
  • documentation webserver is down
  • documentation webserver is slooo….oo…ow
  • documentation webserver no longer has documentation for that older version

So also having access to offline documentation as an alternative can make some developers using your library happy. If you are using Qt in your projects, you yourself might even prefer to study the offline Qt API documentation installed on your local system, either via the Qt Assistant program or as integrated in IDEs like Qt Creator or KDevelop. Even better, the offline documentation comes together with the library itself (e.g. via packages on Linux or *BSD distributions), so the documentation always matches the version of the library.

Being a good developer you have already added lots of nice API documentation in your code. Now how to turn this into a separate offline documentation manual for the consumers of your library, without too much effort?
If your project uses CMake for the build system and your target audience is using documentation viewers like Qt Assistant, Qt Creator or KDevelop, read on for a possible solution.

Deploying doxygen & qhelpgenerator

Very possibly you are already using doxygen to generate e.g. an HTML version of the API dox for your library project. And you might know doxygen can output to more formats than HTML. One of them is the “Qt Compressed Help” (QCH) format, which is a single-file format the above-mentioned documentation viewers support well. QCH support in doxygen has existed since 2009.

So, with doxygen & qhelpgenerator installed and a respective doxygen config file prepared, the build script of your library just needs to additionally call doxygen as part of another target, and there is the API documentation manual as part of your project build, ready to be deployed and packaged with your library.

A CMake macro for convenience

To not have everyone duplicate the needed build setup code, like checking that the needed tools are installed, and to not have everyone learn all the details of the doxygen config file, a CMake macro doing the dull work recommends itself, to be written once and distributed for reuse.

Before doing that though, let’s consider another aspect when it comes to documenting libraries: including documentation for code constructs used from external libraries.

Linking between offline API dox manuals

Especially for C++ libraries, where subclassing across classes from different libraries is common, one wants to have direct links into the documentation of external libraries, so the reader has transparent access to any info.

The Qt help system has an abstract addressing system which works independently from the actual location of the QCH files e.g. in the filesystem. These addresses use the scheme qthelp: and a domain namespace for identifying a given manual (e.g. org.qt-project.qtcore). Additionally there is the concept of a virtual folder which is also set as root directory within a manual (e.g. qtcore). So the address of the document for the class QString would be qthelp://org.qt-project.qtcore/qtcore/qstring.html, and the documentation viewer would resolve it to the manuals registered by the matching metadata.

Doxygen supports the creation of such links into external QCH files. For a given manual of an external library, the references are created based on the domain name, the virtual folder name and a so-called tag file for that library. Such a tag file contains all kind of metadata about classes, methods and more that is needed to document imported structures and to create the references.

The QCH files for Qt libraries come with such tag files. But what about your own library? What if somebody else should be able to create documentation for their code with links into the documentation of your own library?
Doxygen helps here as well: during the creation of the API dox manual it can optionally also create a respective tag file.
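For reference, the relevant Doxyfile settings look roughly like this (all values are illustrative, not taken from a real project):

# Doxyfile excerpt (illustrative values)
GENERATE_QHP           = YES
QCH_FILE               = Foo.qch
QHP_NAMESPACE          = org.kde.Foo
QHP_VIRTUAL_FOLDER     = Foo
QHG_LOCATION           = qhelpgenerator

# Tag file describing this library, for others to link against
GENERATE_TAGFILE       = Foo.tags

# Tag files of external libraries, mapped to their qthelp:// locations
TAGFILES               = qtcore.tags=qthelp://org.qt-project.qtcore/qtcore/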

Extending CMake Config files with API dox info

So for generating the references into some external library documentation, the location of its tag file, its domain name and its virtual folder name need to be known and explicitly added to the doxygen config file. That is extra work, error-prone due to typos, and might miss changes.

But it can be automated. All the information is known to the builder of that external library. And with the builder storing the information in standardized CMake variables with the external library’s CMake config files, it could also be automatically processed into the needed doxygen config entries in the build of your own library.

So a simple find_package(Bar) would bring in all the needed data, and passing Bar as name of the external library should be enough to let the to-be-written CMake macro derive the names of the variables with that data.

So with these theoretic considerations finally some real code as draft:

Patch uploaded for your review and comments

As possible extension of the Extra-CMake-Modules (ECM) the review request https://phabricator.kde.org/D2854 proposes 2 macros for making it simple and easy to add to the build of own libraries the generation of API dox manuals with links into external API dox manuals.

The call of the macro for generating the tag and qch files would e.g. be like this:

ecm_generate_qch(
    Foo
    VERSION "2.1.0"
    ORG_DOMAIN org.kde
    SOURCE_DIRS
        ${CMAKE_SOURCE_DIR}/src
    EXTERN
        Qt5Core
        Bar
    QCH_INSTALL_DESTINATION ${KDE_INSTALL_FULL_DATADIR}/qch
    TAGS_INSTALL_DESTINATION ${KDE_INSTALL_FULL_DATADIR}/qch
)

This would create and install the files ${KDE_INSTALL_FULL_DATADIR}/qch/Foo.qch and ${KDE_INSTALL_FULL_DATADIR}/qch/Foo.tags.

The call for generating the CMake Config file would be:

ecm_generate_package_apidox_file(
    ${CMAKE_CURRENT_BINARY_DIR}/FooConfigApiDox.cmake
    NAMES Foo
)

This generates the file FooConfigApiDox.cmake, whose content is setting the respective variables with the API dox metadata:

set(Foo_APIDOX_TAGSFILE "/usr/share/docs/Foo.tags")
set(Foo_APIDOX_QHP_NAMESPACE "org.kde.Foo")
set(Foo_APIDOX_QHP_NAMESPACE_VERSIONED "org.kde.Foo.210")
set(Foo_APIDOX_QHP_VIRTUALFOLDER "Foo")

This file would then be included in the FooConfig.cmake file

# ...
include("${CMAKE_CURRENT_LIST_DIR}/FooConfigApiDox.cmake")
# ...

and installed together in the same directory.

Tightening and extending integration

In the current draft, for defining what source files doxygen should include for generating the API documentation, the macro ecm_generate_qch() simply takes a list of directories via the SOURCE_DIRS argument, which is then, together with some wildcard patterns for typical C and C++ files, written to the doxygen config file.
Being part of the build system, though, there might be more specific info available about which files should be used, e.g. by using the PUBLIC_HEADER property of a library target. Anyone experienced with the usage of that or similar CMake features is invited to propose how to integrate this into the API of the proposed new macro.

Currently there is no standard installation directory for QCH files. So documentation viewers supporting QCH files need to be manually pointed to any newly installed QCH file (the exception being the QCH files for Qt, whose location can be queried with qmake -query QT_INSTALL_DOCS).
Instead it would be nice if those viewers picked up such files automatically. How could that be achieved?

It would be also good if this or some similar API dox metadata system could be upstreamed into CMake itself, so there is a reliable standard for this, helping everyone with the creation of cross-project integrated API manuals. At least the metadata variable naming patterns would need to get wide recognition to get really useful. Anyone interested in pushing this upstream?

For proper documentation of public and non-public API perhaps the doxygen development could also pick up this old thread about missing proper support for marking classes & methods as internal. Or is there meanwhile a good solution for this?

On the product side, what manual formats beside QCH should be supported?
On the tool side, what other API documentation tools should be supported (perhaps something clang-based)?

Comments welcome here or on the phabricator page.


September 26, 2016

(really short post)
I started restyling and trying to finish the Emoji.


Hope you like it.
Suggestions?
Thanks

Illustration of a meeting

With KDE having grown to a large and central Free Software community over the last 20 years, our interactions with other organizations have become increasingly important for us. KDE software is available on several platforms, is shipped by numerous distributions large and small, and KDE has become the go-to Free Software community when it comes to Qt. In addition to those who cooperate with KDE on a technical level, organizations which fight for the same vision as ours are our natural allies as well.

To put these alliances on a more formal level, the KDE e.V. hereby introduces the KDE e.V. Advisory Board as a means to offer a space for communication between organizations which are allied with KDE, from both the corporate and the non-profit worlds.

One of the core goals of the Advisory Board is to provide KDE with insights into the needs of the various organizations that surround us. We are very aware that we need the ability to combine our efforts for greater impact and the only way we can do that is by adopting a more diverse view from outside of our organization on topics that are relevant to us. This will allow all of us to benefit from one another's experience.

"KDE's vision of a world in which everyone has control over their digital life and enjoys freedom and privacy cannot be realized by KDE alone. We need strong allies. I am therefore excited that we are formalizing our relationship with a number of these strong allies with the Advisory Board and what that will bring for them, for KDE, our users and Free Software as a whole." says Lydia Pintscher, President of KDE e.V.

We are proud to already announce the first members of the Advisory Board:

  • From its very beginning Canonical has been a major investor in the Free Software desktop. They work with PC manufacturers such as Dell, HP and Lenovo to ship the best Free Software to millions of desktop users worldwide.

    Canonical will be working with the KDE community to keep making the latest KDE technology available to Ubuntu and Kubuntu users, and expanding that into making Snap packages of KDE frameworks and applications that are easily installable by users of any Linux desktop.

  • SUSE, a pioneer in open source software, provides reliable, interoperable Linux, cloud infrastructure and storage solutions that give enterprises greater control and flexibility. More than 20 years of engineering excellence, exceptional service and an unrivaled partner ecosystem provide the basis for SUSE Linux Enterprise products as well as supporting the openSUSE community which produces the Tumbleweed and Leap community-driven distributions.
    It was natural for SUSE & openSUSE, being long standing joint KDE patrons, to join the KDE Advisory Board. This new channel will foster the already good communications between the KDE community and SUSE/openSUSE, which will bring mutual benefits for all.

  • Following its motto "Liberating Technology", Blue Systems not only produces the two Linux distributions Maui and Netrunner, but also invests directly in several KDE projects, such as Plasma (Desktop and Mobile) or KDE neon. Being part of the Advisory Board further enhances the collaboration between Blue Systems and KDE on all levels.

  • Founded in 1998, the Open Source Initiative protects and promotes open source by providing a foundation for community success. It champions open source in society through education, infrastructure and collaboration. The OSI is a California public benefit corporation, with 501(c)(3) tax-exempt status.

    Free and Open Source software projects are developed by diverse and dynamic groups that not only share common goals in the development of technologies, but share common values in the creation of communities. The OSI is joining KDE's advisory board to fulfill their mission of building bridges among different constituencies in the open source community, a critical factor to ensure the ongoing success of both Free and Open Source projects and contributors that support them.

  • FOSS Nigeria is a non-profit organization with the aim of educating Nigerians on KDE and other FOSS applications.

    The first contact with FOSS Nigeria was in 2009, when Adrian de Groot and Jonathan Riddell gave a talk about KDE and Qt projects to more than 300 participants during the first Nigerian international conference on free and open source software; Frederik Gladhorn joined Adrian de Groot at the following year's conference.

    KDE e.V. helped make FOSSng and the KDE Nigeria user group what they are today. The participation of people like Adrian de Groot, Jonathan Riddell and Frederik Gladhorn has played a vital role in the major breakthrough behind the spread of KDE in Nigeria from 2009 to date.

    Every two months the KDE Nigeria user group meets and discusses the future of KDE in Nigeria. The organisers of FOSS Nigeria 2017 are proposing that 55% of the conference papers will focus on KDE and Qt.

  • The Free Software Foundation Europe's aim is to help people control technology instead of the other way around. KDE has been an associated organisation of the FSFE since 2006.

    Since 2009, KDE and the FSFE have shared office rooms in Berlin, and regularly exchange ideas on how to improve the work for software freedom. "It just felt natural and the right thing to do for the FSFE to join the advisory board after we have been asked if we would be interested," says Matthias Kirschner, President of the FSFE.

  • April is the main French advocacy association which has been promoting and defending Free Software in France and Europe since 1996. April brings together several thousand individual members and a few hundred organisations (businesses, nonprofits, local governments, educational institutions).

    Through the work of its volunteers and permanent staff, April is able to carry out a number of different campaigns to defend the freedoms of computer users. You can join April, or support it, by making a donation.

  • The Document Foundation is the home of LibreOffice, the world's most widely-used free office suite, and the Document Liberation Project, a community of developers focused on providing powerful tools for the conversion of proprietary documents into ODF files. It was created in the belief that the culture born of an independent foundation brings out the best in contributors and will deliver the best software for users.

    "We share with KDE the same commitment to Free Software and open standards, and we have both invested a significant amount of time for the development and the growth of the Open Document Format", says Thorsten Behrens, Director at The Document Foundation. "By joining KDE Advisory Board, we want to underline potential synergies between large free software projects, to grow the ecosystem and improve the quality of end user solutions".

  • In 2003 the city administration of Munich, one of the largest cities in Germany, evaluated the migration to Open-Source desktop software. Within the LiMux project from 2005 until 2013 around 18,000 desktop computers were migrated from Microsoft Windows to a Linux-based desktop with KDE and OpenOffice.org. Remaining Windows-based desktop computers for special use cases and applications are equipped as much as possible with Open-Source applications like Firefox and Thunderbird. In the beginning of the project, KDE 3.x was chosen as the desktop environment for the LiMux client to ensure a smooth transition in handling of graphical user interfaces for the users who were used to the, at that time, established Windows 2000 desktop. Current LiMux client versions ship with KDE Plasma 4.x desktop.

    The desktop environment and its applications play a major role in the user experience of the desktop system. Many of the around 44,000 employees of the city administration use KDE applications for their work on a daily basis. Some of them are more and some are less computer-oriented and therefore the preconfiguration of the desktop is rather conservative and geared towards continuity.

    The city of Munich often has the need to customize the system-wide standard configuration of the desktop environment and its applications. In order to communicate and discuss such issues with the KDE community and to find solutions, developers from Munich regularly attend Akademy. Our goal is to make, together with the KDE community, the Plasma Desktop suitable for a large enterprise environment.

  • The Free Software Foundation is a 30-year-old nonprofit with a worldwide mission to promote computer user freedom. They defend the rights of all software users. KDE has an important part to play in achieving that mission, because of its excellent work creating a free and user-friendly application environment. KDE's free, powerful, and exceptionally usable software both demonstrates that a future world of everyday computer users using entirely free software is entirely possible, and helps us get there. We're looking forward to closer collaboration and are honored to be part of the advisory board.

If you would like an organization you're with to be part of the KDE e.V. Advisory Board, you can read more about the program here:
https://ev.kde.org/advisoryboard.php

For more information, don't hesitate to send an e-mail to kde-ev-board@kde.org

I have the worst sense of timing when adopting technologies and always find myself at transition points. Python 2 to 3, OpenGL fixed to programmable pipeline, and Qt widgets to Qt Quick. Perhaps the most significant thing to come out of Nokia's short stewardship of Qt, Qt Quick (originally Qt QUICK, or Qt User Interface Creation Kit) is the biggest, and arguably most controversial, change in Qt in recent years. Unless The Qt Company makes a highly unlikely U-turn, it is also probably Qt's future (without discarding regular widgets, of course). It is also definitely the future for Plasma, the KDE desktop. In fact, it is already its present. Of course, I just had to sink my teeth into it, if only briefly. Since I still wasn't firmly set in the ways of the Widget, I thought it might be easier to wrap my head around this new way of coding. I was both wrong and right. Here is my story.

It’s super easy to get started

Compared to most other GUI toolkits, getting a window up using Qt is dirt simple. With QML, it’s even simpler, for various definitions of “simpler”. To have a blue window (technically a rectangle), all it takes is six lines, disregarding whitespace and code formatting preferences.

import QtQuick 2.5

Rectangle {
    width: 640
    height: 480
    color: "blue"
}

You could theoretically pull off something similar in plain QWidget style with just a bit more code, but there are also magic things happening in the background (property bindings, for one). But equally important is that you don’t even need to compile a single line of C++ code to get this up and running.

qmlscene rectangle.qml

Of course, you will hardly write full, sophisticated software this way and you will eventually write in C++. But Qt Quick takes the drudgery out of getting the ball rolling on a new app or idea. Or to put it in a more succinct manner, Qt Quick is great for quickly prototyping and testing ideas.
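
As a small illustration of the property bindings mentioned above (my own sketch, not code from the post): the inner rectangle below always tracks half the size of the outer one, with no update code written anywhere.

import QtQuick 2.5

Rectangle {
    width: 640
    height: 480
    color: "blue"

    Rectangle {
        // These are bindings, not one-time assignments: they are
        // re-evaluated automatically whenever the parent resizes.
        width: parent.width / 2
        height: parent.height / 2
        anchors.centerIn: parent
        color: "lightblue"
    }
}

Saved as, say, binding.qml, it runs with qmlscene exactly like the rectangle above.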

It’s easy to create unconventional interfaces

But Qt Quick isn’t simply a prototyping tool. It’s a framework used to write fully functional and, as the marketing would usually say, beautiful apps, for mobile and desktop. But, let’s be honest, primarily for mobile. And the way it was developed and the way it is used, it is indeed easy to make pretty, “unconventional” software with. By “unconventional”, I mostly mean non-desktop software, programs more dependent on touch and gestures, fluid interfaces, non-rectangular controls, and whatnot. Sure, it is possible to do all of those with C++ Qt. In fact, Qt Quick (mostly) resolves to C++ behind the scenes anyway. But possible doesn’t mean easier, nor does it always mean wise.

It’s harder to create conventional interfaces

Ironically, it is, to some extent, harder to make “traditional” programs, a.k.a. conventional desktop programs, using Qt Quick. In order to work fluidly across different screens with different resolutions and sizes, Qt Quick decomposed its base constructs even further. Whereas you had a QPushButton in QWidget-based programs, you needed to roll your own in Qt Quick, with a Rectangle, MouseArea, and other elements to style and animate your custom button. It was only later in Qt Quick’s development that a set of components that mimicked desktop widgets, in behavior and style, would be created, and even then, even until now, it’s in a state of flux, from Quick Components to Quick Controls to Quick Controls 2. Who knows what the future holds.
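
To make that concrete, a hand-rolled button in that pre-Controls spirit might look roughly like this (a sketch of the pattern, not code taken from any real project):

import QtQuick 2.5

Rectangle {
    id: button
    width: 120
    height: 40
    radius: 4
    border.color: "gray"
    // Visual feedback has to be wired up by hand
    color: mouseArea.pressed ? "darkgray" : "lightgray"

    // Expose a clicked() signal, like a real button would have
    signal clicked()

    Text {
        anchors.centerIn: parent
        text: "Press me"
    }

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        onClicked: button.clicked()
    }
}

Keyboard handling, disabled states and platform-native styling would still all be on you, which is exactly the gap the later Controls sets tried to fill.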

Still not so easy to create well-designed UI

Another irony is that, while Qt Quick has indeed made it easier to create beautiful software, it has also made it harder as well. Not technically speaking, however. To some extent, Qt Quick has put design front and center in the app development process (some would say, as it should have been from the beginning). While it was easy, even too easy, to write complex UIs to the point of being convoluted using QWidgets, not only is it somewhat harder in Qt Quick due to the previous point, but the framework also magnifies insufficient, even ugly, UI designs. To put it simply, while you can get away with the Rectangle + MouseArea equivalent of a QPushButton in your prototype, your final product better not end up using only that. Unless that was actually the goal in the first place.

Versioning is confusing

My pet peeve with Qt Quick is that module version numbers are quite confusing. There is almost no discernible pattern to them, except for the main QtQuick module itself. For example, from my very cursory investigation, Qt 5.5 shipped with QtQuick 2.5, QtQuick.Controls 1.4, QtQuick.Window 2.0, and QtQuick.Layouts 1.2, just to name a few. If you only go by or target the very latest Qt release, there will be no problem, as the excellent docs provide those numbers for you. But if you're specifically targeting a minimum Qt version (like Qt 5.5, for example), especially on systems that may not have anything more recent, that might involve a bit of guesswork on your part.
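
Concretely, a QML file targeting Qt 5.5 might start with a mix of imports like the following (the version numbers are the ones quoted above, so treat them as illustrative):

// Note how each module carries its own, unrelated version number
import QtQuick 2.5
import QtQuick.Window 2.0
import QtQuick.Controls 1.4
import QtQuick.Layouts 1.2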

Paradigm shift: Imperative to Declarative

The biggest “issue” with Qt Quick is that it represents what is practically a paradigm shift in terms of software development, particularly with regards to QML, the Qt Meta Language, used to write Qt Quick programs. Qt QUICK was designed as a way to help better integrate designers into the development process, by offering them easy-to-use and familiar (at least to those familiar with web technologies like CSS and Javascript) tools to write interfaces on top of the “plumbing” written by software developers. In an ideal world, designers write in QML and developers write in C++. But the world is never ideal and, even in those cases, developers would still need a working knowledge of Qt Quick and QML to tie things together.

Most developers, however, are familiar with imperative languages like C, C++, Javascript, Java, etc. These languages describe how things happen, and in a set order. QML, in contrast, is declarative and describes what things look like. With a few exceptions like nesting, component creation, and the like, things also don’t happen in order. Or at least you’re advised not to rely on the order of creation (much like in Qt you’re advised not to rely on the order of signals and slots). This could be a bit weird for those of us still trying to come to terms with regular, imperative languages, though more seasoned programmers seem to have no problems learning what is to them simply a new tool of the trade.

Those used to using C++ in Qt might also get tripped up by the switch to Javascript. Despite its seeming likeness to CSS, QML is pretty much based on one of the most used yet also most criticized languages on the Web, even of all time. That said, the rule of thumb is to keep the Javascript to a minimum. Although it’ll probably be impossible to escape writing a good amount of Javascript code in more complicated programs, it won’t require you to be a JS guru either.

Worth the trouble?

In my not so humble opinion, Qt Quick won’t be interesting for you if:

  • You have absolutely zero, even negative interest in “fluid”, animated, custom-looking interfaces, like those for mobiles
  • Your program has a dependency on a custom widget that can’t be easily ported to the Qt Quick scene graph (though QQuickPaintedItem is an interesting workaround)
  • You have an epic amount of GUI code that can’t easily be ported to Qt Quick
  • Your program has a complicated relationship with OpenGL, as Qt Quick is currently intricately tied with OpenGL, though that’s changing soon apparently
  • You prefer to have a stable and consistent set of controls/widgets that look and behave the same on desktops, never mind mobile (sets of components, like those from Kirigami, are one solution)

In most other cases, Qt Quick does seem to be the future, though definitely not the only one. If anything, Qt Quick somewhat enforces a way of architecting programs that splits responsibilities between the presentation layer and the business logic. For mobile app development with Qt, it’s definitely the only way to go. And for extending the Plasma workspace, it also seems to be the only route.

Moving forward

Although it started back in 2010, I can’t help shaking the feeling that Qt Quick is still a fast-moving, always-changing target, in contrast to the rock-solid stability and immutability of Qt Widgets. Of course, those widgets have been there for decades now, so it’s not really a fair comparison. The situation with Qt Quick Components/Controls is particularly head-scratching. Who knows when v3 will come out (probably in a year or two now that v2 is “stable”). On the one hand, coming from an also ever-changing open source world, that’s not exactly a bad thing. On the other hand, you do expect some level of stability from a framework.

As for me personally, I’ve barely scratched the surface of what Qt Quick can do, but I’ve particularly been enamored by its potential as a prototyping tool. Sadly, it doesn’t seem to have been explored that much either. But that’s a (long) blog post for another day.

Following last week’s monthly Café, we decided to concentrate on advanced trimming features and, if possible, an audio mixer for the next Kdenlive 16.12 release. It was also mentioned that several users requested the comeback of the rotoscoping effect, which was lost in the KF5 port, preventing some users from upgrading their Kdenlive version.

 

So, good news: I worked on it and rotoscoping is now back in git master, and it will be in the 16.12 release.

On the stability side, we just fixed a packaging issue on our PPA that caused frequent crashes, so if you experienced issues with our PPA, please update to enjoy a stable Kdenlive.

Next Café will be on the 19th of October, at 9pm (Central European Time). See you there!

Jean-Baptiste Mardelle

September 24, 2016

I started a tiny project a couple of days ago: arch-audit.

arch-audit’s main (and only) goal is to display the Arch Linux packages installed on your system that are affected by known vulnerabilities.

To do that, arch-audit parses the CVE page on the Arch wiki, which is maintained by the Arch CVE Monitoring Team.

arch-audit’s output is very verbose when it is started without any arguments, but two options, --quiet (or -q, or -qq) and --format (or -f), allow you to change the output for your use case. There is also a third option, --upgradable, to display only packages that have already been fixed in the Arch Linux repositories.

In my opinion a great use case is the following:

$ ssh www.andreascarpino.it
openssl>=1.0.2.i-1
lib32-openssl>=1:1.0.2.i-1
Last login: Sat Sep 24 23:13:56 2016
$

In fact, I added a systemd timer that executes arch-audit -uq every day and saves its output to a temporary file that is configured as the banner for SSH. Then, every time I log into my server, I get notified about packages that have vulnerabilities but have already been fixed. Time to do a system update!
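
For reference, here is a minimal sketch of such a setup; the unit names, the banner file path and the sshd_config Banner line are my assumptions, not taken from the author’s actual configuration:

# /etc/systemd/system/arch-audit-banner.service
[Unit]
Description=Write upgradable vulnerable packages to the SSH banner file

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/usr/bin/arch-audit -uq > /run/arch-audit-banner'

# /etc/systemd/system/arch-audit-banner.timer
[Unit]
Description=Run arch-audit daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/ssh/sshd_config (excerpt)
Banner /run/arch-audit-banner

Enable it with systemctl enable --now arch-audit-banner.timer and reload sshd afterwards.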

So, now I’m waiting for your feedback! Have fun!

BTW, Lynis already added arch-audit support!

It has been about two weeks since I last wrote about AtCore. In that time Tomaz has committed a few changes that allow us to use a plugin system so that AtCore can specialize in speaking your firmware!? Or something like that. Really, what it means is that we should be able to keep most of the common stuff within the AtCore object and use the plugins to give us firmware-specific stuff when needed.

Did someone say test?

The test is pretty straightforward: evaluate how well AtCore is doing at sending print commands to the printer. To do this, we really just need to print an object that takes some amount of time.

Late nights printing tiny pyramids

I’ve been awake all night watching this thing print, and it is working as expected. First I had to find a decent model to print, and I came across this cool Fractal pyramid.

Model by ricktu

Mandatory hardware pictures ?

Two computers: one to record and play host, while the other keeps my sanity during the night’s print. It would be a long print; the slicer estimates it to take around 5 hours. OK, time for the mandatory hardware pictures. Check out the computer that will be hosting the printer and recording video for later. In this setup we will have two cameras: an extruder cam and an overview cam.

Chair’s eye view

Sadly my third camera didn’t want to play along; it was just unable to get a decent focus on the printer LCD, so we will have to go without this time.

Time for a new floor?

Watch the timelapse video… 7 hours later…

After 7 hours of printing we have it completed. The best part I have saved for last: when printing I used glow-in-the-dark filament.

Triangles…

To quickly charge up the glow I placed the model between the lights of some very bright LED lamps. Unfortunately the camera didn’t do such a great job picking up the detail with the model glowing in the dark.

Glow

 

 

So how did we do?

Well, the print was a success! The RAM usage was a bit high for my liking, but that is most likely due to our text log. We will do further tests to check that. The firmware plugin for Repetier seems to be printing stably for any length of print. I would call that a success!


Dear digiKam fans and users,

After the second release, 5.1.0, published one month ago, the digiKam team is proud to announce the new release 5.2.0 of the digiKam Software Collection. This version introduces a new round of bug triage and some fixes following new feedback from end users.

read more

September 23, 2016

Today’s weekly mobile IMG update brings:

  • Updated packages
  • New appstream release
  • New Kirigami-based Discover design

You can flash using instructions at https://plasma-mobile.org/nexus-5/

September 22, 2016

This announcement is also available in Italian and Taiwanese Mandarin.

The latest updates for KDE's Plasma, Applications and Frameworks series are now available to all Chakra users.

The Plasma 5.7.5 release is the final bugfix update for the 5.7.x series, as 5.8.0 will be released soon. It includes a month's worth of bugfixes and new translations, with notable changes found in the plasma workspace, sddm-kcm and networkmanager packages.

Applications 16.08.1 include more than 45 recorded bugfixes and improvements to kdepim, kate, kdenlive, konsole, marble, kajongg, kopete, umbrello, among others.

Frameworks 5.26.0 include bugfixes and improvements to breeze icons, plasma framework, kio, ktexteditor and sonnet.

Other notable package upgrades:

[core]

  • cups 2.1.4
  • git 2.10.0
  • mariadb 10.1.17 providing the security fix of MySQL 0-day "CVE-2016-6662"
  • php 5.6.25
  • poppler 0.47.0 - this breaks backward compatibility, if you have any local/CCR packages depending on it you will have to recompile them

[desktop]

  • pdf2svg 0.2.3

[lib32]

  • wine 1.9.19

[gtk]

  • amule 10958

It should be safe to answer yes to any replacement question by Pacman. If in doubt or if you face another issue in relation to this update, please ask or report it on the related forum section.

Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror synchronized with our main server after this announcement.
