February 24, 2017

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 1 has been released. With this Beta 1 pre-release, you can see and test what we are preparing for 17.04, which we will release in April.

NOTE: This is a Beta 1 release. Kubuntu Beta releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Beta 1:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB drive: http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-1/

Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta1/Kubuntu

February 23, 2017

Just when you think that control over Plasma's elements could not be more complete, the KDE Community surprises you. Today I want to introduce Sticky Window Snapping, a KWin script that noticeably improves window behavior in Plasma 5 and works flawlessly.

Sticky Window Snapping, improving window behavior in Plasma 5

From Flupp comes a KWin script that gives Plasma 5's windows new functionality. It is Sticky Window Snapping, which makes windows stick to one another, improving their handling considerably.

Its basic operation is as follows: when the edges of two windows come close together they become magnetized, so that once joined, they can be resized relative to one another. These effects can also be enabled or disabled using keyboard shortcuts.

 

By the way, if you liked it and use it, it would not hurt to thank the author for the work with comments, positive votes, and shares since, as I once said in a blog post, a simple thank-you also helps.

More information: KDE Store – KWin Scripts

How to install Sticky Window Snapping

Installing this script is very simple, since it is done directly from System Settings. The exact steps are as follows:

  • Open the application launcher (Kickoff).
  • Click on System Settings.
  • Click on Window Management.
  • Go to the KWin Scripts section.
  • Click on Get New Scripts.
  • Search for Sticky Window Snapping and click Install.
  • Enable the script and click Apply.

Simple and, from now on, indispensable on all the Plasma desktops of my computers.

 

 

Some time ago I posted a blog post about how I packaged the Telegram desktop client for Flatpak. I've been updating it since then at reasonable intervals, as I don't have time to update it more often, and mostly because the Telegram client's build system breaks my build quite often. Recently I discovered that someone managed to patch Telegram to use system Qt libraries instead of building its own patched Qt and linking it statically. After some time I managed to adjust those patches and make them work with my build, which allows me to use Qt from the KDE runtimes. Here are the new instructions for getting this to work:

Add the KDE runtimes repository and install the runtime:
$ flatpak remote-add kde --from https://distribute.kde.org/kderuntime.flatpakrepo
$ flatpak install kde org.kde.Platform

And then you can install and run the Telegram desktop client:
$ wget https://jgrulich.fedorapeople.org/telegram/keys/telegram.gpg
$ flatpak remote-add --gpg-import=telegram.gpg telegram-desktop https://jgrulich.fedorapeople.org/telegram/repo/
$ flatpak install telegram-desktop org.telegram.TelegramDesktopDevel
$ flatpak run org.telegram.TelegramDesktopDevel

Or install it from the bundle:
$ wget https://jgrulich.fedorapeople.org/telegram/telegram.flatpak
$ flatpak install --bundle telegram.flatpak

The reason I did the hard work to build it with Qt from the KDE runtimes is that you can now use Telegram with portals support if you run it with the “-platform flatpak” parameter. Unfortunately, this only makes the OpenURI portal work, as Telegram has some internal hacks to use the GTK file dialog; for that reason I still allow access to the user’s home directory. There is also a bug when you use Telegram under KDE, where it tries to use QSystemTrayIcon instead of libappindicator; unfortunately, Telegram’s system tray icon (the one using QSystemTrayIcon) works only with Qt 5.6.2, and the KDE runtimes ship Qt 5.7.1. The system tray icon is visible, but its context menu doesn’t work, so if you want a fully working system tray icon you have to use the “--env=XDG_CURRENT_DESKTOP=gnome” flatpak parameter to force it to use libappindicator.
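
For example, combining both workarounds, the run command from above might look like this (just the two parameters mentioned applied together, nothing more):

$ flatpak run --env=XDG_CURRENT_DESKTOP=gnome org.telegram.TelegramDesktopDevel -platform flatpak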

And that’s it. Sorry you had Telegram broken for a couple of days while I was fighting with it, but hopefully it will work perfectly now.

I am happy to inform you that Qt 5.9 Alpha has been released today.

Qt 5.9 Alpha is an important milestone on our way to the final Qt 5.9.0 release, which is targeted to be released by the end of May 2017.

The Alpha release is available only as source packages. Binary installers will be available via the online installer in conjunction with the Beta release, as well as via development snapshots during the coming weeks.

To learn about the features in Qt 5.9, please read the Qt 5.9 new features page. For a more detailed overview of some key Qt 5.9 features, check the Qt Roadmap for 2017 blog post.

If you want to try Qt 5.9, please download the Qt 5.9 Alpha source packages from your Qt Account or from download.qt.io.

Please remember to give us feedback by writing to the mailing lists and reporting bugs.

The post Qt 5.9 Alpha Released appeared first on Qt Blog.

February 22, 2017

Yesterday during our team meeting Eike told me that I’m a mobile C++ conference nowadays. While it sounds funny, it is true that I’ve been a bit more active than usual.

C++ in Moscow

Now I’m in beautiful Moscow, thanks to Sergey Platonov, who invited me to speak at C++ Russia again this year. While I haven’t been pining for the winter in Russia, I did miss the city.


Read more...

Yesterday was one of those days I keep marked on my personal calendar: I had a live broadcast with my virtual colleagues, in which Kirigami was the star of the sixteenth KDE España podcast. This time the topic was a technical one, but it switched on a light I had not seen until now: the convergence of the KDE Community's projects.

Kirigami was the star of the sixteenth KDE España podcast

The second video podcast of KDE España's third season, titled "Kirigami, the KDE interface for creating mobile applications", was recorded yesterday using Google's services, without any notable technical problems.

The participants in the sixteenth video podcast were:

  • Ruben Gómez Antolí, member of KDE España, who once again took on the role of host.
  • Aleix Pol (@AleixPol), former president of KDE España and vice president of KDE e.V., who gave the point of view of a KDE developer who has participated in the project.
  • Baltasar Ortega (@baltolkien), secretary of KDE España and creator and editor of this blog, who gave the user's point of view.

Over the nearly one hour and twenty minutes of the video podcast, we talked about the beginnings of the Kirigami project, its evolution, its current state, and its future. We were also lucky enough to hear some juicy news about Plasma Mobile.

Personally, I loved the podcast; I learned a lot, and I came away with a few ideas that drove me a little crazy:

  • Thanks to Kirigami, an application's code really is the same for a large desktop screen as for a small mobile one. That is, the developer writes the application only once, and Kirigami takes care of adapting it to the screen and to its specific usability.
  • Plasma Mobile has changed the base it runs on, from Ubuntu Phone to Android.
  • All of the KDE Community's technologies converge to make KDE applications universal across all kinds of devices and operating systems.
  • Basically, someone is needed to take the reins, pick up these technologies, and fit them together, so that we can have the power of GNU/Linux on a multitude of devices. Little by little we are getting closer to that long-awaited moment.

I hope you liked it. If so, you know the drill: thumbs up, share, and don't forget to visit and subscribe to KDE España's YouTube channel.

As always, we await your comments, which I assure you are very valuable to the developers, even when they are constructive criticism (the other kind is never good for anyone). We would also like to know which topics you would like us to talk about in upcoming podcasts.

I would also like to take this opportunity to invite you to subscribe to the Ivoox channel of the KDE España podcasts, which will soon be up to date.

WebGL Streaming is optimized for Qt Quick and allows you to run remote Qt Quick applications in a browser.

I’m working on a platform plugin to run remote applications in your browser, using OpenGL command streaming.

When the remote application runs using this new platform plugin, it creates a lightweight web server. When a user connects to the application, a web socket connection is opened between the server and the client, using QWebSocketServer.
The application running on the remote computer serializes all GL calls to binary data and sends them over the web socket connection.
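
To make the mechanism more concrete, here is a minimal sketch of the transport idea using QWebSocket (the helper name, wire format, and functionId are illustrative assumptions; the actual plugin's protocol is more involved):

#include <QtWebSockets/QWebSocket>
#include <QtCore/QByteArray>
#include <QtCore/QDataStream>
#include <QtCore/QIODevice>

// Hypothetical helper: pack a GL call identifier and its already-serialized
// arguments into a single binary web socket message for the client to replay.
void sendGlCall(QWebSocket *socket, quint32 functionId, const QByteArray &arguments)
{
    QByteArray message;
    QDataStream stream(&message, QIODevice::WriteOnly);
    stream << functionId << arguments;
    socket->sendBinaryMessage(message);
}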

The local browser sends events (mouse, touch, or keyboard) to the remote application, so user interaction is possible. There is even multi-touch support (up to 6 fingers):

And some Qt Quick demos running in the browser:

The “calqlatr” example:

“clocks” example:

“emitters” (particles) example:

“samegame” example:

Desktop browsers are also supported:

It allows multiple connections to the same application.


New windows are shown in a different canvas in the browser.

To improve performance, I'm also working on supporting WebSocket compression, in order to use the permessage-deflate extension. It's actually working, but needs more testing.

This feature is going to be introduced in Qt 5.10, as noted in the 2017 roadmap.

The post Qt Quick WebGL Streaming appeared first on Qt Blog.

KDE had 4 talks at this year's FOSDEM conference. Here are the recordings.

From Gtk to Qt: A Strange Journey, part 2

The continuation of the original talk from Dirk Hohndel and Linus Torvalds about the port of Subsurface from Gtk to Qt, now with mobile in mind.

Kube

The next generation communication and collaboration client

Bundling KDE

Where does KDE land in the Snap and Flatpak world?

KDE SlimBook Q&A

February 21, 2017

One of the interesting things about working with Qt is seeing all the unexpected ways our users use the APIs we create. Last year I got a bug report requesting an API to set a custom frame rate for QML animations when using QQuickRenderControl. The reason was that the user was using QQuickRenderControl as an engine to render video output from Qt Quick, and if your target was, say, 24 frames per second, the animations were not smooth because of how the default animation driver behaves. So, inspired by this use case, I decided to take a stab at creating such an example myself.


This may not be the most professional-looking user interface, but what it does is still pretty neat. The objective is to feed it an animated QML scene, and it outputs an image file for each frame of the animation. These images can then be converted into a video or an animated image using an external tool. The challenge is that Qt Quick is a UI tool, not a movie generator.

The naive approach would be to create a QQuickWindow, set the window size to the output target size, and then grab the frame by calling QQuickWindow::grabWindow() each time the frameSwapped() signal is emitted. There are a couple of issues with this approach though. The first is that the video would need to render in real time: if you wanted to render an animation that was 5 minutes long, it would take 5 minutes, because it would be just like recording your application for 5 minutes. The second issue is that, in the best case, you would be rendering video at the refresh rate of your monitor. Even that would require a reasonably powerful machine, because the QQuickWindow::grabWindow() call involves a glReadPixels call, which is quite expensive. It is also problematic if you need to render at a frame rate different from your monitor refresh (which is what the user that inspired me was complaining about). So here is how I addressed both of these issues.
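
For illustration, the naive approach might look roughly like this (a sketch for comparison, not code from the example project):

#include <QtQuick/QQuickWindow>
#include <QtGui/QImage>
#include <memory>

// Grab every presented frame of an on-screen window, in real time.
// Each grabWindow() call triggers a costly glReadPixels round-trip.
void recordNaively(QQuickWindow *window)
{
    auto frameNumber = std::make_shared<int>(0);
    QObject::connect(window, &QQuickWindow::frameSwapped, window,
                     [window, frameNumber]() {
        const QImage frame = window->grabWindow();
        frame.save(QStringLiteral("output_%1.png").arg((*frameNumber)++));
    });
}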

QQuickRenderControl

QQuickRenderControl is a magical class that lets you do all kinds of crazy things with Qt Quick content. For our purposes, we will use it to render Qt Quick content to an offscreen surface as fast as we can. Rather than creating an on-screen QQuickWindow, we create a dummy QQuickWindow, and via the render control we render content to a QOpenGLFramebufferObject instead.

    // Setup Format
    QSurfaceFormat format;
    format.setDepthBufferSize(16);
    format.setStencilBufferSize(8);

    // Setup OpenGL Context
    m_context = new QOpenGLContext;
    m_context->setFormat(format);
    m_context->create();

    // Setup dummy Surface (to create FBO with)
    m_offscreenSurface = new QOffscreenSurface;
    m_offscreenSurface->setFormat(m_context->format());
    m_offscreenSurface->create();

    // Setup Render Control and dummy window 
    m_renderControl = new QQuickRenderControl(this);
    m_quickWindow = new QQuickWindow(m_renderControl);

    // Setup QML Engine
    m_qmlEngine = new QQmlEngine;
    if (!m_qmlEngine->incubationController())
        m_qmlEngine->setIncubationController(m_quickWindow->incubationController());

    // Finish it all off
    m_context->makeCurrent(m_offscreenSurface);
    m_renderControl->initialize(m_context);

The above gets QQuickRenderControl set up. Then, when the size is known, you can actually create the QOpenGLFramebufferObject and tell the dummy QQuickWindow that this is where it will be rendering.

void MovieRenderer::createFbo()
{
    m_fbo = new QOpenGLFramebufferObject(m_size * m_dpr, QOpenGLFramebufferObject::CombinedDepthStencil);
    m_quickWindow->setRenderTarget(m_fbo);
}
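
With the FBO in place, the QML scene itself still has to be loaded and parented into the dummy window. That step is not shown in this post's snippets; a minimal sketch following the usual QQuickRenderControl pattern (the members m_qmlComponent and m_rootItem are illustrative names) could look like this:

void MovieRenderer::loadQml(const QString &qmlFile)
{
    m_qmlComponent = new QQmlComponent(m_qmlEngine, QUrl::fromLocalFile(qmlFile));
    QObject *rootObject = m_qmlComponent->create();
    m_rootItem = qobject_cast<QQuickItem *>(rootObject);

    // Parent the root item into the dummy window's content item so the
    // scene graph picks it up, and match its size to the output size.
    m_rootItem->setParentItem(m_quickWindow->contentItem());
    m_rootItem->setWidth(m_size.width());
    m_rootItem->setHeight(m_size.height());
    m_quickWindow->setGeometry(0, 0, m_size.width(), m_size.height());
}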

Once the content is loaded, it's just a matter of rendering it. Unlike with QQuickWindow, QQuickRenderControl allows you to control when the steps of the rendering process occur. In our case we want to render as fast as possible, so this is what our rendering setup looks like:

void MovieRenderer::renderNext()
{
    // Polish, synchronize and render the next frame (into our fbo).
    m_renderControl->polishItems();
    m_renderControl->sync();
    m_renderControl->render();
    m_context->functions()->glFlush();

    m_currentFrame++;

    // Grab the contents of the FBO here ...

    if (m_currentFrame < m_frames) {
        // Schedule the next update
        QEvent *updateRequest = new QEvent(QEvent::UpdateRequest);
        QCoreApplication::postEvent(this, updateRequest);
    } else {
        // Finished
        cleanup();
    }
}

bool MovieRenderer::event(QEvent *event)
{
    if (event->type() == QEvent::UpdateRequest) {
        renderNext();
        return true;
    }
    return QObject::event(event);
}
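
The grab step elided above can be as simple as reading the FBO back into a QImage and saving it under the current frame number; a sketch (the m_outputDirectory member is an assumption, and JPEG matches the ffmpeg invocation shown later):

    // Where "Grab the contents of the FBO here ..." is elided:
    const QImage frame = m_fbo->toImage();
    frame.save(QStringLiteral("%1/output_%2.jpg").arg(m_outputDirectory).arg(m_currentFrame));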

This renderNext()/event() pair sets up an event-driven loop that renders as fast as possible while still handling events between frames, which is needed for advancing animations with Qt Quick.

Custom QAnimationDriver

The second issue we need to address is that the animation behavior is wrong. To remedy this, we need a custom QAnimationDriver that enables us to advance animations at our own frame rate. The default behavior is to try to advance the animations in steps as close as possible to the refresh rate of the monitor the application is running on. Since we never present the content we render to the screen, that behavior doesn't make sense for us. Instead, we can install our own QAnimationDriver, which can be manually advanced for each frame we generate, based on a predetermined frame rate. Here is the whole implementation of my custom animation driver:

class AnimationDriver : public QAnimationDriver
{
public:
    AnimationDriver(int msPerStep)
        : m_step(msPerStep)
        , m_elapsed(0)
    {}

    void advance() override
    {
        m_elapsed += m_step;
        advanceAnimation();
    }
    qint64 elapsed() const override
    {
        return m_elapsed;
    }
private:
    int m_step;
    qint64 m_elapsed;
};

Now, to use this, you just need to install the new QAnimationDriver. When you call QAnimationDriver::install(), it replaces the current one, so Qt Quick will then behave as we need it to. When we start the movie renderer, we also install the custom AnimationDriver:

    m_frames = m_duration / 1000 * m_fps;
    m_animationDriver = new AnimationDriver(1000 / m_fps);
    m_animationDriver->install();

    // Start the renderer
    renderNext();

And finally, since we control the render loop, we need to manually advance the animation driver. So, before the end of the renderNext() method, make sure to call:

m_animationDriver->advance();

And that is it. Now we can render as fast as possible, and our animation engine will step perfectly for the frame rate we are generating frames for. It is important to remember that you must process events after calling advance() on your animation driver, though, because animations are advanced through the Qt event and signals-and-slots system. If you don't do this, you will generate the same frame many times.

Results

Once you run the MovieRenderer, you end up with a folder full of images representing each frame. To prepare video files from the generated output, I used ffmpeg:

ffmpeg -r 24 -f image2 -s 1280x720 -i output_%d.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p hello_world_24.mp4
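
This command could also be driven from the application itself; here is a minimal sketch using QProcess with the same arguments (purely illustrative, not part of the published example):

#include <QtCore/QProcess>

// Run ffmpeg over the generated frames and block until it finishes.
void encodeMovie()
{
    QProcess ffmpeg;
    ffmpeg.start(QStringLiteral("ffmpeg"),
                 { "-r", "24", "-f", "image2", "-s", "1280x720",
                   "-i", "output_%d.jpg", "-vcodec", "libx264",
                   "-crf", "25", "-pix_fmt", "yuv420p",
                   "hello_world_24.mp4" });
    ffmpeg.waitForFinished(-1);
}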

The ffmpeg command above generates a 720p video at 24 fps from a series of files called output_%d.jpg. It would also be possible to create an example that either called this tool for you via QProcess, as sketched above, or even included an encoder library to generate the video directly. I went for the simplest approach, using only what Qt has built in, for this example. Here are a few example movies I generated:

The first video is rendered at 60 FPS and the second at 24 FPS. Notice how they animate at the same speed, but one is smoother than the other. This is the intended behavior in action.

Well, that's all I have to show; the rest is up to you. I've published the code for the QML Movie Renderer here, so go check it out now! I hope this example inspires you to make other cool projects as well, and I look forward to seeing what new, unexpected ways you'll be using Qt in the future.

The post Making Movies with QML appeared first on Qt Blog.

This is the first in a series of blog posts on QStringView, the std::u16string_view equivalent for Qt. You can read about QStringView in my original post to the Qt development mailing-list, follow its status by tracking the “qstringview” topic on Gerrit and learn about string views in general in Marshall Clow’s CppCon 2015 talk, aptly […]

The post QStringView Diaries: Advances in QStringLiteral appeared first on KDAB.

…why not!

Shortly before FOSDEM, Aleix Pol asked if I had ever put Plasma in a Snap. While I was a bit perplexed by the notion itself, I also found this a rather interesting idea.

So, over the past couple of weeks, I spent a bit of time here and there trying to see if it is possible.


It is!

But let’s start at the beginning. Snap is one of the Linux bundle formats that are currently very much en vogue. Basically, whatever is necessary to run an application is put into a self-contained archive from which the application then gets run. The motivation is to isolate application building and delivery from operating-system building and delivery. In short, you do not depend on your Linux distribution to provide a package; as long as the distribution can run the middleware for the specific bundle format, you can get a bundle from the software's author and it will run. As an added bonus, these bundles usually also get confined. That means that whatever is inside can’t access system files or other programs unless permission for this was given in some form or fashion.

Putting Plasma, KDE’s award-winning desktop workspace, in a snap is interesting for all the same reasons it is interesting for applications. Distributing binary builds would be less of a hassle, testing is more accessible and confinement in various ways can lessen the impact of security issues in the confined software.

With the snap format specifically Plasma has two challenges:

  1. The snapped software is mounted in a changing path that is different from the installation directory.
  2. Confining Plasma is a bit tricky because of how many actors are involved in a Plasma session and some of them needing far-reaching access to system services.

As it turns out, problem 1 in particular is biting Plasma fairly hard. Not exactly a great surprise; after all, relocating (i.e. changing the paths of) an installed Plasma isn't exactly something we've done in the past. In fact, it goes further than that, as ultimately Plasma's dependencies need to be relocatable as well, which, for example, Xwayland is not.

But let’s talk about the snapping itself first. For the purposes of this proof of concept, I simply recycled KDE neon‘s deb builds. Snapcraft, the build tool for snaps, has built-in support for installing debs into a snap, so that is a great timesaver to get things off the ground, as it were. Additionally, I used the Plasma Wayland stack instead of the X11 stack; confinement makes much more sense with Wayland than with X11.

Relocatability

Relocatability is a tricky topic. A lot of the time, fixed paths are compiled into the binary because it is easy to do and somewhat secure: depending on the specific environment at the time of invocation, one could otherwise be tricked into executing a malicious binary in $PATH instead of the desired one, and explicitly specifying the path is a well-understood safeguard against this sort of problem. Unfortunately, it also means that you cannot move your installed tree anywhere but where it was installed. The relocatable and safe solution is slightly more involved in terms of code, as you need to resolve what you want to invoke relative to your own location; the fact that this is more code, and not exactly trivial to get right, is why one often opts to simply hard-compile paths. This is a problem when packing things into a relocatable snap, though. I had to apply a whole bunch of hacks to either resolve binaries from $PATH or resolve their location relatively. None of these are particularly useful patches, but here ya go.
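
To illustrate the relocatable lookup, a resolver along these lines (a hypothetical sketch, not one of the actual patches) first tries a sibling of the running executable and only then falls back to $PATH:

#include <QtCore/QCoreApplication>
#include <QtCore/QDir>
#include <QtCore/QFile>
#include <QtCore/QStandardPaths>

// Resolve a helper binary relative to the current executable so the whole
// installed tree can be moved, falling back to a $PATH lookup otherwise.
QString resolveHelper(const QString &name)
{
    const QString sibling = QDir(QCoreApplication::applicationDirPath()).filePath(name);
    if (QFile::exists(sibling))
        return sibling;
    return QStandardPaths::findExecutable(name);
}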

Session

Once all relocatability issues were out of the way I finally had an actual Plasma session. Weeeh!

Confinement

Confining Plasma as a whole is fairly straightforward, albeit a bit of a drag, since it's basically a matter of figuring out what is or isn't required to make things fly. A lot of logouts and logins is what it takes. Fortunately, snaps have a built-in mechanism to expose DBus session services offered by them. A full-blown Plasma session offers an enormous number of services on DBus, from the general-purpose notification service to the special-interest Plasma Activity service. Being able to expose them efficiently is a great help in tweaking confinement.

Not everything is about DBus though! Sometimes a snap needs to talk to a system service, and obviously a workspace as powerful as Plasma needs to talk to a bunch of them. Advanced access control needs to be done in snapd (the thing that manages installed snaps). Snapd's interfaces control what is and is not allowed for a snap. To get Plasma to start and work with confinement, a bunch of holes need to be poked in the confinement that are outside the scope of the existing interfaces. KWin, in particular, is taking the role of a fairly central service in the Plasma Wayland world, so it needs far-reaching access to do its job. Unfortunately, interfaces currently can only be built within snapd's source tree itself. I made an example interface which covers most of the relevant core services, but unless you build a snapd yourself this won't be particularly easy to try.

Summary

All in all, Plasma is easily bundled up once one gets relocatability problems out of the way. And thanks to the confinement control snap and snapd offer, it is also perfectly possible to restrict the workspace through confinement.

I did not touch on integration issues at all, however. Running the workspace from a confined bundle is all nice and dandy, but not very useful, since Plasma won't have any applications it can launch, as they live either on the system or in other snaps. A confined Plasma would know about neither right now.

There is also the lingering question of whether confining like this makes sense at all. Putting all of Plasma into the same snap means this one snap will need lots of permissions and interaction with the host system. At the same time it also means that keeping confinement profiles up to date would be a continuous feat as there are so many things offered and used by this one snap.

One day, perhaps, we'll see this in production quality. Certainly not today.


To mark KDE's 20th anniversary, the KDE España community has decided to show its more human side and reveal a little of the personal side of its members. Many of them have taken part in the series of posts I have titled "20 interviews for 20 years of KDE", in which they not only talk about their relationship with the project but also show us their more personal side. For the twelfth interview we meet one of the KDE España members with the most experience organizing events, and one of the people most committed to all kinds of social projects that I know. Here is Dani Gutiérrez.

20 interviews for 20 years of KDE (XII): Dani Gutiérrez

Dani Gutiérrez, one of the most ethical people I know and a wizard at organizing events.

Hello! First of all, I would like to thank you for your time. I am sure your words will be a source of inspiration for many others. To start, I would like you to introduce yourself.

Dani Gutiérrez Porset, telecommunications engineer, lecturer at the UPV/EHU university, and CEO of the free software company "Freedom for Knowledge and Technologies".

Kubuntu, one of the most famous distributions among users.

Do you remember how you discovered KDE?

I think it was around version 3.x; I tried several desktops and it was the one I liked best. I have always used Kubuntu.

What did you like most about KDE? And least?

That it is a very usable desktop. There was nothing specific that particularly displeased me. Perhaps, these days, I would like KDE applications to be ported to pure Qt, so that people on Windows or Mac OS X can use them too.

Why did you decide to start collaborating with KDE?

It struck me as a quality project.

What is your motivation for doing so, and what has it given you, or what does it give you?

It is good to collaborate with projects you trust.

The present day

Nowadays, do you collaborate with free software in general, and KDE in particular?

With KDE I hardly collaborate anymore, although if I am asked I recommend it; it is what I install for my clients who want GNU/Linux.
As for free software in general, I am a member of the board of directors of the Bilbao city council's IT company, and from there I am trying to get things done. There is considerable inertia and little willingness to make big changes towards free software, but we have to keep working in that direction, because it is the most publicly useful option for today's society and, above all, for future generations.

 

Which KDE applications or areas have you collaborated on?

Although I would have liked to contribute something here, I have not collaborated as a developer. I was, however, a promoter of an Akademy-es and of an Akademy, both in Bilbao. [Interviewer's note: and both were magnificent.]
Tomorrow Akademy-es 2013 starts in Bilbao

You have convinced me; I would like to lend a hand. What do you think is the best way to start?

Use KDE and report the bugs and improvements you see, learning the channels through which to communicate them.

The KDE Community

Define KDE in three words:

Community, desktop, applications

It is always said that KDE is a community. What do you think?

Free software implies that behind it there are people who have developed it, documented it, promoted it, and so on.
KDE is not only the software but also all the people across the planet who are behind it.
20 ways to collaborate with KDE

You like attending events. Why? Tell us an anecdote.

It is always interesting to meet in person and listen to people.
When we organized Akademy 2013 in Bilbao, it took us quite a while to find a place for the board and another group to have dinner, because our custom is not to dine before 21:00, and they wanted it at around 18:30.

And to finish, I would like you to give us your vision of the KDE Community in the future.

It has to spread to more countries, and to more cities and towns.

Quick questions:

KDE 4 or Plasma 5?
Plasma 5

Your favorite KDE application?
Dolphin

An application you would like to have natively in KDE

Band in a Box

The person

A book: Senderos de libertad, about Chico Mendes, the Amazon, and the fight for justice. A "classic" that will always be there to be read again.
A film: Truman (the one with Ricardo Darín and Javier Cámara; not to be confused with The Truman Show).

A musical artist: Caetano Veloso

Your favorite song: Causas y Azares, by Silvio Rodríguez


When I am alone, I like to eat: mixed salad or octopus

Now your golden minute: say whatever you like to the readers of KDE Blog.

The cloud for the last 10 years or so, and smartphones for the last 5, have considerably disrupted the traditional dynamics of free software, which were built around the PC. With the recent abandonment of Firefox OS, it seems that on mobile only the Ubuntu Touch and AOSP alternatives remain. We will have to pay attention to how this reality evolves, since mobile phones are today a very widespread device, and therefore key for the free software world.

Thank you very much for your time and your words. I am sure the blog's readers will have enjoyed the interview as much as I did, and…


This announcement is also available in Spanish and Taiwanese Mandarin.

The latest updates for KDE's Plasma, Applications and Frameworks series are now available to all Chakra users.

Included with this update is an update to the ncurses, readline, and gnutls related groups of packages, as well as many other important updates in our core repository. Be aware that during this update your screen might turn black. If that is the case and it does not automatically recover after some time, please switch to tty3 with Ctrl+Alt+F3 and then switch back to the Plasma session with Ctrl+Alt+F7. If that does not work, please allow enough time for the upgrade to complete before shutting down. You can check your CPU usage using 'top' after logging in on tty3. You can reboot from tty3 using 'shutdown --reboot'.

The Plasma 5.9.2 release provides additional bugfixes to the many new features and changes that were introduced in 5.9.0, aimed at enhancing the user experience:



Applications 16.12.2 includes more than 20 recorded bugfixes and improvements to kdepim, Dolphin, Kate, Kdenlive, KTouch, and Okular, among others.

Frameworks 5.31.0 includes Python bindings for many modules, in addition to the usual bugfixes and improvements.

Other notable package upgrades and changes:

[core]
alsa-utils 1.1.3
bash 4.4.005
binutils 2.27
dhcpcd 6.11.5
dnsutils 9.11.1
ffmpeg 2.8.11
gawk 4.1.4
gdb 7.12
gnutls 3.5.8: If you have local or CCR packages that require it, they might need a rebuild
gstreamer 1.10.3
gutenprint 5.2.12
hunspell 1.6.0
jack 0.125.0
kdelibs 4.14.29
make 4.2.1
mariadb 10.1.21
mplayer 37916
ncurses 6.0+20170204: If you have local or CCR packages that require it, they might need a rebuild
php 7.0.15
postgresql 9.6.1
python2 2.7.13
readline 7.0.001: If you have local or CCR packages that require it, they might need a rebuild
samba 4.5.3
sqlite3 3.16.0
texinfo 6.3
util-linux 2.29
vim 8.0.0142
wpa_supplicant 2.6

[desktop]
fcitx-qt5 1.1.0
libreoffice 5.2.5
nano 2.7.4
wireshark 2.2.4
qemu 2.8.0
screen 4.5.0

[gtk]
filezilla 3.24.0
thunderbird-kde 45.7.1

[lib32]
wine 2.2

It should be safe to answer yes to any replacement question by Pacman. If in doubt, or if you face any other issue related to this update, please ask or report it in the related forum section.

Most of our mirrors take 12-24 hours to synchronize; after that, it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror has synchronized with our main server after this announcement.

February 20, 2017

The next OpenStack Summit takes place in Boston, MA (USA) on May 8-11, 2017. The "Vote for Presentations" period has already started, and all proposals are again up for community votes. The period will end on February 21st at 11:59pm PST (February 22nd at 8:59am CET).

This time I have submitted a proposal together with WDLabs:

  • Next Generation Hardware for Software Defined Storage - Software-defined storage like Ceph has changed the storage market dramatically in the last few years. While the software has changed, storage hardware has stayed basically the same: commodity servers connected to JBODs utilizing SAS/SATA devices. The next step must be a revolution in the hardware too. At the Austin summit, the Ceph community presented a 4 PB Ceph cluster comprised of WDLabs Converged Microservers. Each Microserver is built by starting with an HGST HE8 HDD platform and adding an ARM processor and DDR memory, running Linux on the drive itself. WDLabs provided access to early production devices to key customers such as Deutsche Telekom for adoption and feedback. This talk will provide insight into our findings running a Ceph cluster on this platform as a storage provider to OpenStack.
This period the voting process changed again; unique URLs to proposals seem to work again. So if you would like to vote for my talk, use this link or search for the proposal (e.g. use the title from above or search for "Al-Gaaf"). As always: every vote is highly welcome!

As in previous years, I highly recommend also searching for "Ceph" or whatever topic you are interested in. You can find the voting page here, with all proposals and abstracts. I'm looking forward to seeing which of these talks, if any, will be selected.

Last year, three new umbrella organisations for free and open-source software (and hardware) projects emerged in Europe. Their aim is to cater to the needs of the community by providing a legal entity for projects to join, leaving the projects free to focus on technical and community tasks. These organisations (Public Software CIC, [The Commons Conservancy], and the Center for the Cultivation of Technology) will take on the overhead of actually running a legal entity themselves.

Among other services, they offer to handle donations, accounting, grants, legal compliance, or even complex governance for the projects that join them. In my opinion (and, seemingly, theirs) such services are useful to these kinds of projects; some of the options that these three organisations bring to the table are quite interesting and inventive.

The problem

As a FOSS or OSHW project grows, it is likely to reach a point where it requires a legal entity for better operation – whether to gather donations, pay for development, handle finances, organise events, increase license predictability and flexibility by consolidating rights, help with better governance, or for other reasons. For example, when a project starts to hold assets – domain names, trade marks, or even just receives money through donations – that should not be the responsibility of one single person, but should, instead, be handled by a legal entity that aligns with the project’s goals. A better idea is to have an entity to take over this tedious, but needed, overhead from the project and let the contributors simply carry on with their work.

So far, the options available to a project are either to establish its own organisation or to join an existing organisation, neither of which may fit well for the project. The existing organisations are either specialised in a specific technology or one of the few technology-neutral umbrella organisations in the US, such as Software in the Public Interest, the Apache Software Foundation, or the Software Freedom Conservancy (SFC). If there is already a technology-specific organisation (e.g. GNOME Foundation, KDE e.V., Plone Foundation) that fits a project’s needs, that may well make a good match.

The problem with setting up a separate organisation is that it takes ongoing time and effort that would much better be spent on the project’s actual goals. This goes double and quadruple for running it and meeting with the annual official obligations – filling out tax forms, proper reporting, making sure everything is in line with internal rules as well as laws, and so on. To make matters worse, failure to do so might result in personal liability for the project leaders that can easily reach thousands or tens of thousands of euros or US dollars.

Cross-border donations are tricky to handle, can be expensive if a currency change is needed, and are rarely tax-deductible. If a project has most of its community in Europe, it would make sense to use a European legal entity.

What is common between all three new European organisations is that none demand a specific outbound license for the projects they manage (as opposed to the Apache Software Foundation, for example), as long as it falls under one of the generally accepted free and open licenses. The organisations must also have internal rules that bind them to act in the public interest (which is the closest approximation to FOSS you can get when it comes to government authorities). Where they differ is the set of services they offer and how much governance oversight they provide.

Public Software CIC

Public Software CIC incorporated in February 2016 as a UK-based Community Interest Company. It is a fiduciary sponsor and administrative service provider for free and open source projects – what it calls public software – in Europe.

While it is not for profit, a Community Interest Company (CIC) is not a charity organisation; the other two new organisations are charities. In the opinion of Public Software’s founders, the tax-deductibility that comes with a charitable status does not deliver benefits that outweigh the limitations such a status brings for smaller projects. Tax recovery on cross-border charitable donations is hard and expensive even where it is possible. Another typical issue with charities is that even when for-profit activities (e.g. selling T-shirts) are allowed, these are throttled by law and require more complex accounting – this situation holds true both for most European charities and for US 501(c)(3) charitable organisations.

Because Public Software CIC is not a charity, it is allowed to trade and has to pay taxes if it has a profit at the end of its tax year. But as Simon Phipps, one of the two directors, explained at a panel at QtCon / FSFE Summit in September 2016, it does not plan to have any profits in the first place, so that is a non-issue.

While a UK CIC is not a charity and can trade freely, by law it still has to strictly act for public benefit and, for this reason, its assets and any trading surplus are locked. This means that assets (e.g. trade marks, money) coming into the CIC are not allowed to be spent or used otherwise than in the interests of the public community declared at incorporation. For Public Software, this means the publicly open communities using and/or developing free and open-source software (i.e. public software). Compliance with the public interest for a CIC also involves approval and monitoring by the Commissioner for Community Interest Companies, who is a UK government official.

The core services Public Software CIC provides to its member projects are:

  • accounting, including invoicing and purchasing
  • tax compliance and reporting
  • meeting legal compliance
  • legal, technical, and governance advice

These are covered by the base fee – 10% of a project's income. This percentage seems to have become the norm (e.g. SFC charges the same). Public Software will also offer additional services (e.g. registering and holding a trade mark or domain name), but for these there will be additional fees to cover costs.

On the panel at QtCon, Phipps mentioned that it would also handle grants, including coordinating and reminding its member projects of deadlines to meet. But it would not write reports for the grants nor would it give loans against future payments from grants. Because many (especially EU) grants only pay out after the sponsored project comes to fruition, a new project that is seeking these grants should take this restriction into consideration.

Public Software CIC already hosts a project called Travel Spirit as a member and has a few projects waiting in the pipeline. While its focus is mainly on newly starting projects, it remains open to any project that would prefer a CIC. At QtCon, Phipps said that he feels it would be the best fit for smaller-scale projects that need help with setting up governance and other internal project rules. My personal (and potentially seriously wrong) prediction is that Public Software CIC would be a great fit for newly-established projects where a complex mishmash of stake holders would have to be coordinated – for example public-private collaborations.

A distinct feature of Public Software CIC is that it distinguishes between different intangible assets/rights and has different rules for them. The basic premise for all asset types is that no other single organisation should own anything from the member project; Public Software is not interested in being a “front” for corporate open source. But then the differences begin. Public Software CIC is perfectly happy and fit to hold trade marks, domain names, and such for its member projects (in fact, if a project holds a trade mark, Public Software would require a transfer). But on the other hand, it holds a firm belief that copyright should not be aggregated by default and that every developer should hold the rights to their own contribution if they are willing.

Apart from FOSS, the Public Software CIC is also open to open-source hardware or any free-culture projects joining. The ownership constraint might in practice prove troublesome for hardware projects, though.

Public Software CIC does not want to actively police license/copyright enforcement, but would try to assist a member project if it became necessary, as far as funds allowed. In fact when a project signs the memorandum of understanding to join the Public Software CIC, the responsibility for copyright enforcement explicitly stays with the project and is not transferred to the CIC. On the other hand, it would, of course, protect the other assets that it holds for a project (e.g. trade marks).

If a project wants to leave at some point, all the assets that the CIC held for it have to go to another asset-locked organisation approved by the UK’s Commissioner of CICs. That could include another UK CIC or charity, or an equivalent entity elsewhere such as a US 501(c)(3).

If all goes wrong with the CIC – due to a huge judgment against one of its member projects or any other reason – the CIC would be wound down and all the remaining member projects would be spun out into other asset-locked organisation(s). Any remaining assets would be transferred to the FSFE, which is also a backer of the CIC.

[The Commons Conservancy]

[The Commons Conservancy] (TCC) incorporated in October 2016 and is an Amsterdam-based Stichting, which is a foundation under Dutch law. TCC was set up by a group of technology veterans from the FOSS, e-science, internet-community, and digital-heritage fields. Its design and philosophy reflects lessons learned in over two decades of supporting FOSS efforts of all sizes in the realm of networking and information technology. It is supported by a number of experienced organisations such as NLnet Foundation (a grant-making organisation set up in the 1980s by pioneers of the European internet) and GÉANT (the European association of national education and research networks).

As TCC’s chairman Michiel Leenaars pointed out in the QtCon panel, the main goal behind TCC is to create a no-cost, legally sound mechanism to share responsibility for intangible assets among developers and organisations, to provide flexible fund-raising capabilities, and to ensure that the projects that join it will forever remain free and open. For that purpose it has invented some rather ingenious methods.

TCC concentrates on a limited list of services it offers, but wants to perfect those. It also aims at being lightweight and modular. As such, the basic services it offers are:

  • assurance that the intangible assets will forever remain free and open
  • governance rules with sane defaults (and optional additions)
  • status to receive charitable donations (to an account at a different organisation)

TCC requires from its member projects only that their governance and decision-making processes are open and verifiable, and that they act in the public benefit. For the rest, it allows the member projects much freedom and offers modules and templates for governance and legal documents solely as an option. The organisation strongly believes that decisions regarding assets and money should lie with the project, relieving the pressure and dependency on individuals. It promotes best practices but tries to keep out of the project’s decisions as much as possible.

TCC does not require that it hold intangible assets (e.g. copyrights, trade marks, patents, design rights) of projects, but still encourages that the projects transfer them to TCC if they want to make use of the more advanced governance modules. The organisation even allows the project to release binaries under a proprietary license, if needed, but under the strict condition that a full copy of the source code must forever remain FOSS.

Two of the advanced modules allow for frictionless sharing of intangible assets between member projects, regardless of whether the outbound licenses of these projects are compatible or not. The “Asset Sharing DRACC” (TCC calls its documents “Directives and Regulatory Archive of [The Commons Conservancy]”, or DRACC) enables developers to dedicate their contributions to several (or all) member projects at the same time. The “Programme Forking DRACC” enables easy sharing of assets between projects when a project forks, even though the forks might have different goals and/or outbound licenses.

As a further example, the “Hibernation of assets DRACC” solves another common issue, namely how to ensure a project can flourish even after the initial mastermind behind it is gone. There are countless projects out there that stagnated because their main developer lost interest, moved on, or even died. In this module there are special rules in place for handling a project that has fallen dormant, and for how the community can afterwards revive the project to simply continue its development. There are more such optional rule sets available for projects to adopt, including rules on how to leave TCC and join a different organisation.

This flexibility is furthered by the fact that by design TCC does not tie the project to any money-related services. To minimise risks, [The Commons Conservancy] does not handle money at all – its statutes literally even forbid it to open a bank account. Instead, it is setting up agreements with established charitable entities that are specialised in handling funds. The easiest option would be to simply use one of these charities to handle the project’s financial back-end (e.g. GÉANT has opted for NLnet Foundation), but projects are free to use any other financial back-end if they so desire.

Not only is the service TCC offers compatible with other services, it is also free as in beer, so using TCC’s services in parallel with some other organisation to handle the project’s finances does not increase a project’s costs.

TCC is able to handle projects that receive grants, but will not manage grants itself. There are plans to set up a separate legal entity to handle grants and other activities such as support contracts, but nothing is set in stone yet. For at least a subset of projects it would also be possible to apply for loans in anticipation of post-paid (e.g. EU) grants through NLnet.

A project may easily leave TCC whenever it wants, but there are checks and balances set in place to ensure that the project remains free and open even if it spins out to a new legal entity. An example is that a spun out (or “Graduated” as it is called in TCC) project leaves a snapshot of itself with TCC as a backup. Should the new entity fail, the hibernated snapshot can then be revived by the community.

TCC is not limited to software – it is very much open to hosting also open hardware and other “commons” efforts such as open educational resources.

TCC does not plan to be involved in legal proceedings – whether filing or defending lawsuits. Nor is it an interesting target, simply because it does not take in or manage any money. If anything goes wrong with a member project, the plan is to isolate that project into a separate legal entity and keep a (licensed) clone of the assets in order to continue development afterwards if possible.

Given the background of some of the founders of TCC (with deep roots in the beginnings of the internet itself), and the memorandum of understanding with GÉANT and NREN, it is not surprising that some of the first projects to join are linked to research and core network systems (e.g. eduVPN and FileSender). Its offering seems to be an interesting framework for already existing projects that want to ensure they will remain free and open forever; especially if they have or anticipate a wider community of interconnected projects that would benefit from the flexibility that TCC offers.

The Center for the Cultivation of Technology

The Center for the Cultivation of Technology (CCT) also incorporated in October 2016, as a German gGmbH, which is a non-profit limited-liability corporation. Further, the CCT is fully owned by the Renewable Freedom Foundation.

This is an interesting set-up, as it is effectively a company that has to act in public interest and can handle tax-deductible donations. It is also able to deal with for-profit/commercial work, as long as the profit is reinvested into its activities that are in public benefit. Regarding any activities that are not in the public interest, CCT would have to pay taxes. Of course, activities in the public interest have to represent the lion’s share in CCT.

Its owner, the Renewable Freedom Foundation, in turn is a German Stiftung (i.e. foundation) whose mission is to “protect and preserve civil liberties, especially in the digital landscape” and has already helped fund projects such as Tor, GNUnet, and La Quadrature du Net.

While a UK CIC and a German gGmbH are both limited-liability corporations that have to act in the public interest, they have somewhat different legal and tax obligations and each has its own specifics. CCT’s purpose is “the research and development of free and open technologies”. For the sake of public authorities it defines “free and open technologies” as developments with results that are made transparent and that, including design and construction plans, source code, and documentation, are made available free and without licensing costs to the general public. Applying this definition, the CCT is inclusive of open-source hardware and potentially other technological fields.

Similar to the TCC, the CCT aims to be as lightweight by default as possible. The biggest difference, though, is that the Center for the Cultivation of Technology is first and foremost about handling money – as such its services are:

  • accounting and budgeting
  • financial, tax and donor reporting
  • setting up and managing of donations (including crowd-funding)
  • grant management and reporting
  • managing contracts, employment and merchandise

The business model is similar to that of PS CIC in that, for basic services, CCT will be taking 10% from incoming donations and that more costly tasks would have to be paid separately. There are plans to eventually offer some services for free, which would be covered by grants that CCT would apply for itself. In effect, it wants to take over the whole administrative and financial overhead from the project in order to allow the projects to concentrate on writing code and managing themselves.

Further still, the CCT has taken it upon itself to automate as much as possible, both through processes and through software. If viable FOSS solutions are missing, it will write them itself and release the software under a FOSS license, for the benefit of other FOSS legal entities as well.

As Stephan Urbach, its CEO, mentioned on the panel at QtCon, the CCT is not just able to handle grants for projects, but is also willing to take over reporting for them. Anyone who has ever partaken in an EU (or other) grant probably agrees that reporting is often the most painful part of the grant process. The raw data for the reports would, of course, still have to be provided by the project itself. But the CCT would then take care of relevant logistics, administration, and writing of the grant reports. The company is even considering offering loans for some grants, as soon as enough projects join to make the operations sustainable.

In addition, the Center for the Cultivation of Technology has a co-working office in Berlin, where member projects are welcome to work if they need office space. The CCT is also willing to facilitate in-person meetings or hackathons. Like the other two organisations, it has access to a network of experts and potential mentors, which it could resort to if one of its projects needed such advice.

Regarding whether it should hold copyright or not, the Center for the Cultivation of Technology is flexible, but at the very beginning it would primarily offer holding other intangible assets, such as domain names and trade marks. That being said, at least in the early phase of its existence, holding and managing copyright is not the top priority. Therefore the CCT has for now deferred the decision regarding its position on license enforcement and potential lawsuit strategy. Accounting, budgeting, and handling administrative tasks, as well as automation of them all, are clearly where its strengths lie and this is where it initially wants to pour most effort into.

Upon a dissolution of the company, its assets would fall to Renewable Freedom Foundation.

Since the founders of CCT have deep roots in anonymity and privacy solutions such as Tor, I imagine that from those corners the first wave of projects will join. As for the second wave, it seems to me that CCT would be a great choice for projects that want to offload as much of financial overhead as possible, especially if they plan to apply for grants and would like help with applying and reporting.

Conclusion

2016 may not have been the year of the Linux desktop, but it surely was the year of FOSS umbrella organisations. It is an odd coincidence that three such different organisations popped up in Europe at the same time – initially oblivious of each other – to provide much-needed services to FOSS projects.

Not only are FOSS projects now spoiled for choice regarding such service providers in Europe, but it is also refreshing to see that these organisations get along so well from the start. For example, Simon Phipps is also an adviser at CCT, and I help with both CCT and TCC.

In fact, I would not be surprised to see, instead of bitter competition, greater collaboration between them, allowing each to specialise in what it does best and allowing the projects to mix and match services between them. For example, I can see how a project might want to pick TCC to handle its intangible assets and at the same time use CCT to handle its finances. All three organisations have also stated that, should a project contact them that they feel would be better served by one of the others, they would refer it to that organisation instead.

Since at least the legal and governance documents of CCT and TCC will be available on-line under free licenses (CC0-1.0 and CC-By-4.0, respectively), cross-pollination of ideas and even the setting up of new organisations will be easier. It may be early days for these three umbrella organisations, but I am quite optimistic about their usefulness and confident that they will fill the gaps left open by their older US siblings and by single-project organisations.

Update: TCC’s DRACC are already publicly available on-line.

If a project comes to the conclusion that it might need a legal entity, now is a great time to think about it. At FOSDEM 2017 there was another panel with CCT, TCC, PS CIC, and SFC, where further questions and comments were taken.


Disclaimer: At the time of writing, I am working closely with two of the organisations – as the General Counsel of the Center for the Cultivation of Technology, and as co-author of the legal and governance documents (the DRACC) of [The Commons Conservancy]. This article does not constitute the official position of either of the two organisations nor any other I might be affiliated with.

Note: This article first appeared in LWN on 1 February 2017. This here is a slightly modified and updated version of it.


hook out → coming soon: extremely exciting stuff regarding the FLA 2.0


A lab running Thomas' current rollout of Plasma 4

With Plasma 5 having reached maturity for widespread use, we are starting to see rollouts of it in large environments. Dot News interviewed the admin behind one such rollout in Austrian schools.

Please introduce yourself and your work

Hi, my name is Thomas Weissel. Among many other things I'm a free and open source software enthusiast, teacher, web developer and father - not necessarily in that particular order. I studied computer science at the TU Wien in Vienna, Austria, and I teach computer science, philosophy and psychology for a living. Currently I am working on a secure exam environment for Austrian schools based on Linux and KDE Plasma.

You say you will roll out Plasma into your school. Which users will get it?

About 34 classrooms, 2 consulting rooms, the teachers' room and one computer lab have just been upgraded to a custom "distribution" based on Kubuntu and KDE neon. At least 75 teachers are going to work with the system. Most of the 700+ students are not going to touch these computers (because they are locked away), but in their 5th grade every one of them gets a live USB flash drive in order to work with the very same system in the computer lab. The system has been extended with a lot of custom applications that allow students, for example, to copy their bootable USB flash drives with a mouse click or to reset the configuration to the defaults. Next week I'm going to make the basic system "life", bundled with the secure exam environment "life-exam", available online, and I hope many other people and schools are going to use the system in the future.

What hardware do you use?

In most classrooms we still have aging Asus Eee PCs. We switched to more powerful Acer laptops with 4-8 GB of memory for new acquisitions. One of our computer labs just got an upgrade to new HP desktop PCs with big Samsung screens. On these computers everything works like a charm.

What distro will you use?

KDE neon!

What problems do you anticipate as part of installing Plasma?

We had a slight problem "mirroring" the displays to the projector without losing the configured widgets, but this bug is now fixed in Plasma 5.9.2, thanks to Marco Martin. Other than that, getting rid of problems was the reason why I migrated to Linux in the first place. For one and a half years now we have been working with Linux and Plasma 4 in the classrooms, and from a system administrator's point of view the migration was a huge success. Three to five support calls every week because of weird system problems with Windows 7 were suddenly reduced to one or two per week, and not a single one was due to a problem with the system itself. We used live USB flash drives in the classrooms and the teachers unplugged them all the time, despite a big sticker with a "do not remove" warning. That was the source of those support calls. We fixed that by installing the system to the hard drive last week :-)

The only problem I anticipate now is not with Plasma but with the office suite. We had a lot of conversion (layout) problems with docx, pptx, and xlsx files. One source of the problem is the extensive use of proprietary fonts like "Calibri". Automatically replacing "Calibri" with "Carlito" (which is metric-compatible) is a good start, but a lot of the problems remain. I installed Word Online and Excel Online as Chrome apps to work around this problem. Most teachers just installed LibreOffice to make sure everything works well, but PowerPoint is still a better program than Impress in my opinion. WPS Office Presentation is a very good alternative for pptx files (but not free as in free speech).
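If you want to check which font your system actually uses when a document asks for Calibri, fontconfig's fc-match tool shows the substitution; the output below is only illustrative and depends on the fonts and substitution rules installed:

$ fc-match Calibri
Carlito-Regular.ttf: "Carlito" "Regular"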

How did you pick Plasma rather than any other desktop or operating system?

On top of all the small problems with our Windows installations and the hours lost updating Java, Flash, QuickTime, Silverlight and so on, Microsoft turned off the KMS server in Vienna, and this introduced new problems with the key management service. To make it short: I wanted to get rid of Windows in the classroom and enforce free and open standards. I have this weird belief that proprietary pseudo-standards like OOXML Transitional and expensive software like Photoshop, MS Office and so on have no reason to exist in public schools. Therefore Gimp, Calligra Suite and LibreOffice took over, and the world keeps spinning. I bet on Plasma because I can easily make it work and look like Windows 7, and this was very important for acceptance among the teachers. I also chose Plasma because I wanted to present the best possible and most customizable desktop to the students. I wanted them to like working with the system, and Plasma made that easy. The first hour working with students is all about 3D effects, custom fonts, widgets and custom themes. After half an hour every single student desktop looks completely different, and the students start to see it as "their own" system. In the classrooms this is different, of course. It is absolutely necessary that everyone leaves the computer in a usable state for the next teacher. That's yet another reason why I picked Plasma: the KIOSK system. I reported a lot of issues with the KIOSK system, and the Plasma developers did an amazing job finding and fixing all the bugs I found for 5.8. We now have a desktop that is completely locked down to make sure nobody accidentally removes or reconfigures important parts of the user interface.

What applications will you run with it?

The whole list is too long for this interview. In the classrooms LibreOffice and Firefox are probably the most used applications. In the computer lab we start programming in Scratch (BYOB) - later we code in Kate, edit photos in Gimp, and animate in Synfig Studio. The school's ownCloud server is widely used to sync and access private files.

What has the reaction been from your users so far?

Most students just don't care - some are completely hooked because of the endless possibilities you have with Plasma and Linux - others just install Steam and Minecraft on their flash drives and are satisfied. The teachers don't care either. I think most of them didn't even realize that I switched the operating system underneath the user software. The only thing they want is for their documents to be rendered correctly. As someone who has observed this "format war" for many years now, I can tell that this problem is not going away. The only "real" solution is to stop using those formats and completely switch to the Open Document Format. That shouldn't be a problem in a public school, but the individual vendor lock-in of the teachers is not to be underestimated. Installing Microsoft fonts and the newest version of LibreOffice, and teaching the teachers how to export to PDF, helped a lot. The idea is that students and teachers are empowered to use the same software at home that they use in school, without having to invest a lot of money to do so.

What is the attitude to Free and Open Source Software in Austria generally?

The education authority in Lower Austria recommended a Linux-based live USB system as well as the Microsoft solution for secure exam environments. There was the LinuxAdvanced project that provided the idea for LIFE, and there is the desktop4education project that aims to replace any complex Windows infrastructure; as far as I know, the Free Software Foundation is very active in Vienna. Other than that, I'd say the situation in Austria is not really good. Wienux, a self-made Linux distribution that was meant to replace Windows XP in Vienna's administration, was killed before it even started. Schools get Microsoft licenses for Office and Windows whether they want them or not. There are contracts in place that run for 3 years and usually get extended for an additional 3 years, and so on. There even is an EU directive to use free and open standards wherever possible in public institutions, but no one seems to even know (or care) about it.

How can communities like KDE bridge the gap from the enthusiast world to the mass market?

Plasma 5.9 is a wonderful piece of software. KDE Connect is a feature that wows everybody, and even NetworkManager is nowadays a tool Windows admins look at with envy. With Google looking for a way to build their future OS without Linux, Apple never going to think different, and Microsoft going into the cloud with Windows, I don't see a world where everybody is using KDE and Linux. But mass-market suitability is already here. In my opinion, the way to reach a wider userbase is through public services and schools. There is absolutely no need to use any software in schools other than free and open source software. If our schoolchildren realize that they can do everything with free software, they will consider using it later in life when they start their own companies. IMHO that's the way to go, and therefore I'm working on it :-)

Discussion about this and similar projects takes place on the KDE Enterprise mailing list.

If you happen to be in Gothenburg on Wednesday you are most welcome to visit foss-gbg. It is a free event (you still have to register so that we can arrange some light food) starting at 17.00.

The topics are Yocto Linux on FPGA-based hardware, risk and license management in open source projects and a product release by the local start-up Zifra (an encryptable SD-card).

More information and free tickets are available at the foss-gbg site.

Welcome!

February 17, 2017

Glad to announce the release of KStars v2.7.4 for Windows 64-bit. This version is built with a more recent Qt (5.8) and the latest KF5 frameworks for Windows, bringing more features and stability.


This release brings many bug fixes, enhancements for resource-limited devices, and improvements, especially to KStars' premier astrophotography tool: Ekos. Windows users will be glad to learn that they can now use an offline astrometry solver on Windows, thanks to the efforts behind ANSVR, the local Astrometry.net solver. ANSVR mimics the astrometry.net online server on your local computer, so no internet connection is required for astrometry queries.

After installing the ANSVR server and downloading the appropriate index files for your setup, you can simply change the API URL to use the ANSVR server as illustrated below:
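The illustration in the original post is a screenshot; with a default local ANSVR installation the API URL typically looks like the following (the host and port depend on your setup; port 8080 is ANSVR's usual default):

http://127.0.0.1:8080/api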



In the Ekos Align module, keep the solver type set to Online so that it uses the local ANSVR server for all astrometry queries. Then you can use the Align module as you normally would. This release also features the Ekos Polar Alignment Assistant tool, a very easy to use, spot-on tool to polar-align your mount.

Clear skies!

For years I have told people not to start Kate as root to edit files. The normal response I got was "but I have to edit this file". The problem with starting GUI applications as root is that X11 is extremely insecure, and it is considerably easy for another application to attack them.

An application like Kate depends on libraries such as Qt. Qt itself disallows running as a setuid app:

Qt is not an appropriate solution for setuid programs due to its large attack surface.

If Qt is not an appropriate solution for setuid programs, it is also not an appropriate solution for GUI applications running as root. And Qt is just one of the dependencies of graphical applications. There are obviously also xcb, Xlib, OpenGL, xkbcommon, etc. etc.

So how can another application attack an application running as root? A year ago I implemented a simple proof-of-concept attack against Dolphin. The attack waits for Dolphin to be started as root. As soon as it starts, it uses the XTest extension to fake input, enable the embedded Konsole window, and type into it.
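To get a sense of how trivially another client can synthesize X11 input, here is a hypothetical sketch using the xdotool utility (which drives the same XTest extension); this is for illustration only and is not the original proof-of-concept:

$ # find a window whose title mentions Dolphin and type into it
$ WIN=$(xdotool search --name "Dolphin" | head -n 1)
$ xdotool windowactivate --sync "$WIN"
$ xdotool type 'echo owned'
$ xdotool key Return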

This is just one example. The elephant in the room is string handling, though. Every X11 window has many window properties, and every process can write to them. We just have to accept that string handling is complex and can easily trigger a crash.

Luckily there is no need to run the editor as root just to edit a file. There is a neat tool called sudoedit, which does the magic of starting the editor as your normal user and takes care of storing the file as root when you save.
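For example, a minimal sketch of that workflow with Kate (sudoedit consults SUDO_EDITOR; Kate's -b/--block flag keeps it in the foreground until you close the file):

$ SUDO_EDITOR="kate -b" sudoedit /etc/fstab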

Today I pushed a change for Kate and KWrite which no longer allows them to be run as root. Instead, they educate the user about how to do the same with sudoedit.

Now I understand that this will break the workflow for some users. But with a minor adjustment to your workflow you get the same result. In fact it will be better, because the Kate you start is able to pick up your configured styling. It will also work on Wayland. And most importantly, it will be secure.

I am also aware that if you run a malicious application you are already owned. I think we should add protection nevertheless.

The Applications 17.04 release schedule is available at the usual place: https://community.kde.org/Schedules/Applications/17.04_Release_Schedule

Dependency freeze is in 4 weeks and Feature Freeze in 5 weeks, so hurry up!

Creating a demo setup or proof-of-concept for an embedded device can be a real pain. To ease that pain, Qt for Device Creation has a list of supported devices for which you can flash a "Boot to Qt" image and get your software running on the target hardware literally within minutes.

Background

Back in 2014 we introduced a way to make an Android device boot to Qt without the need of a custom OS build. Android has been ported to several devices and the Android injection method made it possible to get all the benefits of native Qt applications on an embedded device with the adaptation already provided by Android.

The Android injection was introduced with Qt 5.3.1, supporting Android versions 4.2 and 4.4. It is not in our best interest that anyone be forced to use an older version of Qt, nor does it help if the Android version we support does not support the hardware that developers are planning to use. I have good news: the situation has now changed.

Late last year we realized that there is still demand for Android injection on embedded devices, so we checked what it would take to bring the support up to date. The target was to use Qt 5.8 to build the Boot to Qt demo application and run it on a device running Android 7.0. The device of choice was the Nexus 6 smartphone, one of the supported devices for Android Open Source Project version 7.0.0.

The process

We first took the Android 7.0 toolchain and updated the Qt 5.4 Boot to Qt Android injection source code to match the updated APIs of Android 7.0. Once we could build Qt 5.4 with the toolchain, it was time to forward-port the changes all the way to Qt 5.8. Qt's modularity has improved since version 5.4, which became apparent during the process: for example, the old SurfaceFlinger integration was replaced with a platform plugin.

The results can be seen in the videos below.

The Boot to Qt Android injection is an excellent way to speed up the development and get your software to run on target hardware as early as possible. If you want to know more about the Boot to Qt and Android injection, don’t hesitate to contact us.

The post Boot to Qt on embedded HW using Android 7.0 and Qt 5.8 appeared first on Qt Blog.

The second point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.5. Read more about it: http://kubuntu.org/news/plasma-5-8-5-bugfix-release-in-xenial-and-yakkety-backports-now/
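For reference, adding the Backports PPA and pulling in the updates usually amounts to something like this (a sketch; exact steps may vary with your setup):

$ sudo add-apt-repository ppa:kubuntu-ppa/backports
$ sudo apt update && sudo apt full-upgrade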

Warning: upgrades from 14.04 LTS to 16.04 LTS are problematic and should not be attempted by the average user. Please install a fresh copy of 16.04.2 instead. To prevent prompts about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.
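A one-line way to make that change (a sketch using sed; adjust the target value as desired):

$ sudo sed -i 's/^Prompt=lts/Prompt=normal/' /etc/update-manager/release-upgrades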

See the Ubuntu 16.04.2 release announcement and Kubuntu Release Notes.

Download 16.04.2 images.

You may have heard of Rust by now: the new programming language that "pursues the trifecta: safety, concurrency, and speed". You have to admit, even if you don't know what trifecta means, it sounds exciting.

I've been toying with Rust for a while and have given a presentation at QtCon comparing C++ and Rust. I've been meaning to turn that presentation into a blog post. This is not that blog post.

Here I show how you can use QML and Rust together to create graphical applications with elegant code. The example we're building is a very simple file browser. People that are familiar with Rust can ogle and admire the QML snippets. If you're a Qt and QML veteran, I'm sure you can read the Rust snippets here quite well. And if you're new to both QML and Rust, you can learn twice as much.

The example here is intentionally kept simple and feature-poor. At the end, I'll give suggestions for simple improvements that you can make as an exercise. The code is available as a nice tarball and in a git repo.

Command-line Hello, world!

First we set up the project. We will need to have QML and Rust installed. If you do not have those yet, just continue reading this post and you'll be all the more motivated to go ahead and install them.

Once those two are installed, we can create a new project with Rust's package manager and build tool cargo.

[~/src]$ # Create a new project called sousa (it's a kind of dolphin ;-)
[~/src]$ cargo new --bin sousa
     Created binary (application) `sousa` project

[~/src]$ cd sousa

[~/src/sousa]$ # Add a dependency for the QML bindings version 0.0.9
[~/src/sousa]$ echo 'qml = "0.0.9"' >> Cargo.toml
[~/src/sousa]$ # Build, this will download and compile dependencies and the project.
[~/src/sousa]$ cargo build
    Updating registry `https://github.com/rust-lang/crates.io-index`
   Compiling libc v0.2.20
   Compiling qml v0.0.9
   Compiling lazy_static v0.2.2
   Compiling sousa v0.1.0 (file:///home/oever/src/sousa)
    Finished debug [unoptimized + debuginfo] target(s) in 25.39 secs

[~/src/sousa]$ # Run the program.
[~/src/sousa]$ cargo run
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `target/debug/sousa`
Hello, world!

The same without output:

cargo new --bin sousa
cd sousa
echo 'qml = "0.0.9"' >> Cargo.toml
cargo build
cargo run

The mix of Rust and QML lives! Of course the program is not using any QML yet. Let's fix that.

Hello, world! with QML

Now that we have a starting point we can start adding some QML. Let's change src/main.rs from a command-line Hello, world to a graphical Hello, world! application.

main.rs before

fn main() {
    println!("Hello, world!");
}

Some explanation for the people reading Rust code for the first time: things that look like functions but have a name that ends with ! are macros. Forget everything you know about C/C++ macros. Macros in Rust are elegant and powerful. We will see this below when we mock moc.

main.rs after

extern crate qml;

use qml::*;

fn main() {
    // Create a new QML Engine.
    let mut engine = QmlEngine::new();

    // Bind a message to the QML environment.
    let message = QVariant::from("Hello, world!");
    engine.set_property("message", &message);

    // Load some QML
    engine.load_data("
        import QtQuick 2.0
        import QtQuick.Controls 1.0

        ApplicationWindow {
            visible: true
            Text {
                anchors.fill: parent
                text: message
            }
        }
    ");
    engine.exec();
}

Libraries in Rust are called "crates". This example uses QML bindings that currently have version number 0.0.9, so the API may change.

In the example above, the QML is placed literally in the code. Literal strings in Rust can span multiple lines.

Usually you do not need to specify the type of a variable: you can just write let (for immutable bindings) or let mut for mutable ones. Like in C++, & is used to pass an object by reference. You have to use the & in the function definition, but also when calling the function (unless your variable is already a reference).

The QML code has an ApplicationWindow with a Text. The message Hello, world! is passed to the QML environment as a QVariant. This is the first time in our program that information crosses the boundary between Rust and QML.

Hello, world!

Like above, the application can be run with cargo run.

Splitting the code

Let's make this code a bit more maintainable. The QML is moved to a separate file src/sousa.qml which we load from Rust.

import QtQuick 2.0
import QtQuick.Controls 1.0

ApplicationWindow {
    visible: true
    Text {
       anchors.fill: parent
       text: message
    }
}

You can see the adapted Rust code below. In debug mode, the file is read from the file system. In release mode, the file is embedded into the executable to make deployment easier.

extern crate qml;

use qml::*;

fn main() {
    // Create a new QML Engine.
    let mut engine = QmlEngine::new();

    // Bind a message to the QML environment.
    let text = QVariant::from("Hello, world!");
    engine.set_property("message", &text);

    // Load some QML
#[cfg(debug_assertions)]
    engine.load_file("src/sousa.qml");
#[cfg(not(debug_assertions))]
    engine.load_data(include_str!("sousa.qml"));
    engine.exec();
}

The attributes #[cfg(debug_assertions)] and #[cfg(not(debug_assertions))] conditionally compile the next statement. So when you run cargo run, the QML file will be read from disk, and with cargo run --release, the QML will be embedded inside the executable. In debug mode this is convenient: changes to the QML code do not require recompilation.

Listing the contents of a folder

Now that we've created an application that combines Rust and QML let's go a step further and list the contents of a directory instead of a simple message.

QML has a ListView that can display the contents of a ListModel. The ListModel can be filled by the Rust code. First we create a simple Rust structure that contains information about files.

Q_LISTMODEL_ITEM!{
    pub QDirModel<FileItem> {
        file_name: String,
        is_dir: bool,
    }
}

Q_LISTMODEL_ITEM! ends with a !, so it's a macro. Rust macros use pattern matching on the contents of a macro invocation. The matched values are used to generate code. The macro system is not unlike C++ templates, but with a more flexible syntax and simpler rules.

On the QML side, we'd like to show the file names. Directory names should be shown in italic.

ApplicationWindow {
    visible: true

    ListView {
        anchors.fill: parent
        model: dirModel
        delegate: Text {
            text: file_name
            font.italic: is_dir
        }
    }
}

The ListView shows data from a ListModel that we'll define later.

The delegate in the ListView is a kind of template. When an entry in the list is visible in the UI, the delegate is the UI component that shows that entry. The delegate that is shown here is very simple. It is just a Text that shows the file name.

Next, we need to connect the information on the file system to the model. That is done in two steps.

Instead of binding a Hello, world! message to the QML environment, we create an instance of our QDirModel and bind it to the QML environment.

    // Create a model with files.
    let dir_str = ".";
    let current_dir = fs::canonicalize(dir_str)
        .expect(&format!("Could not canonicalize {}.", dir_str));
    let mut dir_model = QDirModel::new();
    list_dir(&current_dir, &mut dir_model).expect("Could not read directory.");
    engine.set_and_store_property("dirModel", &dir_model);

The model is initialized with the current directory. That directory is canonicalized: it is made absolute and symbolic links are resolved. This operation may fail, and Rust forces us to deal with that. If there is an error in fs::canonicalize(dir_str), the returned result is an error instead of a value. The function expect() takes the error and an additional message, prints them, and stops the current thread or program in a controlled way. Rust is a safe programming language because of features like this, where the compiler forces potential failure cases to be handled.

The last missing piece is the function list_dir that reads entries in a directory and places them in the QDirModel.

fn list_dir(dir: &Path, model: &mut QDirModel) -> io::Result<()> {
    // get iterator over readable entries
    let entry_iter = fs::read_dir(dir)?.filter_map(|e| e.ok());

    model.clear();
    for entry in entry_iter {
        if let Ok(metadata) = entry.metadata() {
            if let Ok(file_name) = entry.file_name().into_string() {
                model.append_item(FileItem {
                    file_name: file_name,
                    is_dir: metadata.is_dir(),
                });
            }
        }
    }
    Ok(())
}

There is a lot happening in the first line of this function. An iterator is taken over the contents of a directory. If the reading of the directory fails, the function stops and returns an Err. This is coded by the ? in fs::read_dir(dir)?. When reading each entry, another error may occur. If that happens the iterator returns an Err. We choose here to skip over the erroneous reads; we filter them out with filter_map(|e| e.ok()).

Next, the entries are added to the model in a for loop. Again we see code that deals with possible errors. Reading the metadata for a file may give an error. We choose to skip entries with such errors. Only the entries for which Ok is returned are handled.

The UI should display the file name. Rust uses UTF-8 internally, but a file name can be nearly any sequence of bytes. If the entry is not a valid UTF-8 string, we ignore that entry here. Another option would be to keep the byte array (Vec<u8>) and use a lossy representation of the file name in the user interface that leaves out the parts that cannot be represented in UTF-8.

In other programming languages, it'd be easier to handle these cases sloppily. In Rust we have to be explicit. This explicit code is safer and more understandable for the next programmer reading it.

And here is the result of cargo run. A directory listing with two files and two folders.

a listing of files

A simple file browser

Listing only one fixed directory is no fun. We want to navigate to other directories by clicking on them. We'd like to have an object that can receive the name of a folder to enter and then update the directory listing.

To achieve that we need a staple from the Qt stable: QObject. A QObject can send signals and receive signals. Signals are received in slots. When programming in C++, a special step is needed during compilation: the program moc generates code from the C++ headers.

Thanks to its more powerful and ergonomic macros, Rust can skip this extra step. The syntax to define a QObject is simple. This is our QDirLister:

pub struct DirLister {
    current_dir: PathBuf,
    model: QDirModel,
}
Q_OBJECT!{
    pub DirLister as QDirLister {
        signals:
        slots:
            fn change_directory(dir_name: String);
        properties:
    }
}

The macro Q_OBJECT takes the struct DirLister and wraps it in another struct QDirLister that has signals, slots and properties.

Our simple QDirLister defines only one slot, change_directory, that will receive signals from the QML code when a directory name is clicked. Here is the implementation:

impl QDirLister {
    fn change_directory(&mut self, dir_name: String) -> Option<&QVariant> {
        let new_dir = if dir_name == ".." {
            // go to parent if there is a parent
            self.current_dir.parent().unwrap_or(&self.current_dir).to_owned()
        } else {
            self.current_dir.join(dir_name)
        }; 
        if let Err(err) = list_dir(&new_dir, &mut self.model) {
            println!("Error listing {}: {}",
                     self.current_dir.to_string_lossy(),
                     err));
            return None;
        }
        // listing the directory succeeded so update the current_dir
        self.current_dir = new_dir;
        None
    }
}

If the directory is .., we move up one directory with parent(). Again we have to explicitly handle the case that there is no parent directory. We choose to stay on the same directory in that case.

If the directory is not .., we join() the directory name to the current_dir. We update the model with a new directory listing and print an error and stay on the current directory if that fails.

QDirLister has to be hooked up to the QML code. We add this snippet to the fn main() that we defined earlier.

    // Create a DirLister and pass it to QML
    let dir_lister = DirLister {
        model: dir_model,
        current_dir: current_dir.into(),
    };
    let q_dir_lister = QDirLister::new(dir_lister);
    engine.set_and_store_property("dirLister", q_dir_lister.get_qobj());

And this is how we use it from QML:

import QtQuick 2.0
import QtQuick.Controls 1.0

ApplicationWindow {
    visible: true

    ListView {
        anchors.fill: parent
        model: dirModel
        delegate: Text {
            text: file_name
            font.italic: is_dir

            MouseArea {
                anchors.fill: parent
                cursorShape: is_dir ? Qt.PointingHandCursor : Qt.ArrowCursor
                onClicked: {
                    if (is_dir) {
                        dirLister.change_directory(file_name);
                    }
                }
            }
        }
    }
}

To receive mouse input in QML, there needs to be a MouseArea. When it is clicked (onClicked), a bit of JavaScript sends the file_name to the dirLister via the slot change_directory.

our file browser

Conclusion

Hooking up QML and Rust is elegant. We've created a simple file browser with one QML file, sousa.qml, one Rust file, main.rs, and one package/build file, Cargo.toml.

There are many nice QML user interfaces out there that can be repurposed on top of Rust code. QML can be visually edited with QtCreator. QML can be used for mobile and desktop applications. It's very nice that this wonderful method of creating user interfaces can be used with Rust.

To the C++ programmers: I hope you enjoyed the Rust code and found some inspiration in it. Because Rust is a new language, it can introduce innovative features that cannot easily be added to C++. Rust and C++ can be mixed in one codebase, as is done in Firefox.

Rust has many more wonderful features than can be covered in this blog. You can read more in the Rust book.

Assignments

I promised some assignments. Here they are.

  1. Show an error dialog when a directory cannot be shown. (Hint: the code is already in the git repo and shows a QML feature that we did not use yet: signals.)

  2. Show the file size in the file listing.

  3. Do not make directories clickable if the user has no permission to open them.

  4. Open simple files like pictures and text files when clicked by showing them in a separate pane.

February 16, 2017

Yes, it’s not a typo.

Thanks to the last batch of improvements, and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the epoll event loop, can really scale your web application. Initially I thought that the option to replace Qt's default glib event loop (on Unix) brought no gain, but after increasing the number of connections it handles them a lot better. With 256 connections, the requests per second using the glib event loop drop to 65k req/s, while the epoll one stays at 90k req/s, much closer to the number measured with only 32 connections.
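For what it's worth, the epoll event loop is opt-in. If memory serves, it is enabled through an environment variable, roughly like this (the variable name and flags here are from memory; check the Cutelyst documentation for your version):

$ CUTELYST_EVENT_LOOP_EPOLL=1 cutelyst-wsgi --http-socket :3000 --application ./libmyapp.so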

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations working in Grantlee templates. cutelyst-wsgi gained --socket-timeout and --lazy options, many fixes, and the removal of deprecated Qt API usage, and Unix signal handling now seems to work properly.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang out in the #cutelyst IRC channel on Freenode or in the Google group: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


When setting up a German Equatorial Mount (GEM) for imaging, a critical aspect of capturing long-exposure images is ensuring proper polar alignment. A GEM mount has two axes: the Right Ascension (RA) axis and the Declination (DE) axis. Ideally, the RA axis should be aligned with the celestial sphere's polar axis. A mount's job is to track the stars' motion around the sky, from the moment they rise at the eastern horizon, all the way up across the meridian, and westward until they set.


In long-exposure imaging, a camera is attached to the telescope, and the image sensor captures incoming photons from a particular area in the sky. The incident photons have to strike the same photo-site over and over again if we are to gather a clear and crisp image. Of course, actual photons do not behave this way: optics, atmosphere, and seeing quality all scatter and refract photons in one way or another. Furthermore, photons do not arrive uniformly but follow a Poisson distribution. For point-like sources such as stars, a point spread function describes how photons are spatially distributed across the pixels. Nevertheless, the overall idea is that we want to keep the source photons hitting the same pixels. Otherwise, we might end up with an image plagued by various trail artifacts.

Since mounts are not perfect, they cannot perfectly keep track of an object as it transits across the sky. This can stem from many factors, one of which is the misalignment of the mount's Right Ascension axis with respect to the celestial pole axis. Polar alignment removes one of the biggest sources of tracking errors in the mount, but other sources of error still play a role. If properly aligned, some mounts can track an object for a few minutes with a deviation of only 1-2 arcsec RMS.

However, unless you have a fancy top-of-the-line mount, you'd probably want to use an autoguider to keep the same star locked in the same position over time. Despite all of this, if the axis of the mount is not properly aligned with the celestial pole, even a mechanically perfect mount would lose tracking over time, since tracking errors are proportional to the magnitude of the misalignment. It is therefore very important for long-exposure imaging to get the mount polar-aligned to reduce any residual errors as it tracks across the sky.

Several polar-alignment aids exist today, including, but not limited to:

1. Polar scope built-in your mount.
2. Using drift alignment from applications like PHD2.
3. Dedicated hardware like QHY's PoleMaster.
4. The Ekos Legacy Polar Alignment tool: you need to take exposures of two different points in the sky to measure the drift and find the polar error in each axis (Altitude & Azimuth).
5. SharpCap Polar Alignment tool.

Of the above, the easiest to use are probably QHY's PoleMaster and SharpCap's Polar Alignment tool. However, both are exclusive to Windows. KStars users have long requested an easy-to-use polar alignment helper in Ekos, leveraging its astrometry.net backend.

During the last couple of weeks, I worked on developing the Ekos Polar Alignment Assistant tool (PAA). I started with a simple mathematical model consisting of two images rotated by an arbitrary angle. A sample illustration is below:



Given two points, we can measure the distance between them, which is a chord of the circle of rotation; together with the rotation angle, this gives us the radius. However, two circle solutions match these measurements, only one of which contains the mount's actual RA axis within the image. Finding out which solution is the correct one turned out to be challenging, and even the mount's own reported rotation angle cannot be fully trusted. To draw a unique circle, you need three points. So it was suggested by Gerry Rozema, one of INDI's venerable developers, to capture three images and uniquely identify the circle without a lot of fancy math.
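To sketch the two-point geometry (notation mine, not from the original post): the two solved image centers are the endpoints of a chord of length $c$, and the mount rotated by a known angle $\theta$ between the exposures, so

$$ c = 2r \sin\left(\frac{\theta}{2}\right) \quad\Rightarrow\quad r = \frac{c}{2\sin(\theta/2)} $$

The circle's center lies on the perpendicular bisector of the chord at distance $r\cos(\theta/2)$ from its midpoint, but on either of its two sides; those two candidates are exactly the two circle solutions mentioned above.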

Since it relies on astrometry.net, the PAA has more relaxed requirements than other tools, making it accessible to more users. You can use your own primary or guide camera, provided it has a wide enough FOV for the astrometry solver.

Moreover, the assistant can automatically capture, solve, and even rotate the mount for you. All you have to do is to make the necessary adjustments to your mount.

The new PAA works by capturing and solving three images. It is technically possible to rely on two images only, as described above, but three images improve the accuracy of the solution. After each capture, the mount rotates by a fixed amount and the next image is captured and solved.



Since the mount's true RA/DE coordinates are resolved by astrometry, we can construct a unique circle from the three centers found in the astrometry solutions. The circle's center is the point the mount rotates about (the RA axis), and ideally this point should coincide with the celestial pole. If there is a misalignment, Ekos draws a correction vector. This correction vector can be placed anywhere in the image. Next, the user refreshes the camera feed and adjusts the mount's Altitude and Azimuth knobs until the star is located at the designated cross-hair. It's that easy!
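For the curious, the circle through the three solved centers $P_i = (x_i, y_i)$ has a closed-form center (the standard circumcenter formula; the notation is mine, not from the original post):

$$ D = 2\left[x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)\right] $$

$$ U_x = \frac{(x_1^2 + y_1^2)(y_2 - y_3) + (x_2^2 + y_2^2)(y_3 - y_1) + (x_3^2 + y_3^2)(y_1 - y_2)}{D} $$

$$ U_y = \frac{(x_1^2 + y_1^2)(x_3 - x_2) + (x_2^2 + y_2^2)(x_1 - x_3) + (x_3^2 + y_3^2)(x_2 - x_1)}{D} $$

where $(U_x, U_y)$ is the center of rotation, i.e. the RA axis in image coordinates.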

Ekos PAA is now in Beta and tests/feedback are highly appreciated.


Yay! Now we have a logo! What do you think about it? Atelier is approaching six months of development, and now it's time to give you some updates. AtCore is on its way to becoming stable, and I'm working on the Atelier interface, so we can connect to AtCore and do some magic to everything [...]


February 14, 2017

The city of Munich is currently considering a move away from Free Software back to Microsoft products. We consider this to be a mistake and urge the decision makers to reconsider.

For many years now the City of Munich has been using a mix of software by KDE, LibreOffice and Ubuntu, among others. Mayor Dieter Reiter (a self-proclaimed Microsoft-fan who helped Microsoft move offices to Munich) asked Accenture (a Microsoft partner) to produce a report about the situation of the City of Munich's IT infrastructure. That resulted in a 450-page document. This report is now being misused to push for a move away from Free Software. However the main issues listed in the report were identified to be organizational ones and not related to Free Software operating systems and applications.

The City of Munich is of course free to decide on their IT infrastructure. Nonetheless we believe the move away from Free Software would be a big mistake and feel compelled to speak up. Specifically the move away from Free Software will

  • not actually fix the issues identified in the report by Accenture
  • remove vendor-independence which was one of the core arguments for moving to Free Software in the first place
  • incur estimated costs of €90 million, paid with taxpayer money. Another €15 million is expected to be spent on replacing or upgrading hardware that cannot cope with the requirements of Windows 10 but runs fine with Linux.

The City of Munich has always been a poster child of Free Software in public administrations. It is a showcase of what can be done with Free Software in this setting. A step back from Free Software by the City of Munich would therefore not just be a blow for this particular deployment, but would also have far-reaching effects on other similar deployments.

That said, we take this opportunity to invite all other administrations to leverage the work done by the City of Munich over the last years, and we are willing to help resolve remaining issues in the City of Munich related to our software.

Lydia Pintscher
President, KDE e.V.

Please also read the statement by The Document Foundation.


As some of you might already know, I've been focusing lately on Flatpak and its integration into KDE. You can check my work on the Flatpak KDE portals, which are now included in our KDE runtimes; the repositories were migrated to KDE git, so there has been some progress since the last time I talked about them. Recently I started looking into adding Flatpak support to KDE Discover, to have the same support for Flatpak as GNOME has with gnome-software. From the beginning it was a nightmare for me, as I had never used any glib-based library, which slowed me down a little bit. I also went through the gnome-software code to understand how the Flatpak integration is done there and to get some inspiration. Things have gone well since then, and I already have quite nice stuff to share with you. We currently support the most common functionality, like listing available/installed Flatpak applications in Discover, with the ability to install/remove/update and of course launch them. We also support Flatpak bundles and flatpakref files. If you don't believe me, here are some screenshots:

This is quite exciting stuff for me. There are still plenty of things we need to solve and improve, as well as adding the ability to manage Flatpak repositories, which is quite an important feature to have too. All of this can already be tried in the master branch of Discover; you just need to enable the Flatpak backend. We will keep working intensively on this, and hopefully we will have fully functional Flatpak support in Discover soon, ready for the next Plasma release. See you soon!

The Qt 5.5.1-2 release for the VxWorks Real-Time Operating System (RTOS) supports the new VxWorks 7 release SR0480 (September 2016) on ARMv7, with updates in the Qt Base, Qt Declarative and Qt Quick Controls modules. For a full list of changes, please see the change log.

To learn more about Qt 5.5 for VxWorks, please check out:

Existing licensees of Qt for VxWorks can download the new release from their Qt Account web portal and run it on top of the SR0480 (September 2016) release of VxWorks 7 RTOS. If you wish to conduct a technical evaluation, please contact us to request an evaluation.

If you are planning to visit Embedded World in Nürnberg 14th – 16th March 2017, please come to see Qt at Hall 4, Stand 4-258 and Wind River at Hall 4, Stand 158.

Qt will also be in Wind River's booth at HIMSS 2017 (Booth #7785-21), showcasing a medical IoT heart-rate monitor built with Qt, using VxWorks 7 and Helix Cloud on i.MX6. Learn more.

The VxWorks real-time operating system has been powering embedded devices and systems for more than 30 years and runs on more than 2 billion devices. To learn more about VxWorks, please visit the VxWorks webpage or contact Wind River.

The post Qt 5.5.1-2 for Wind River® VxWorks® Real-Time Operating System Released appeared first on Qt Blog.




Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.