
Extended Realities - Data visualization in AR


In this course, I explored how to integrate Unity and Houdini in order to build an Augmented Reality data exploration.


This course was an introduction to designing Augmented Reality and Virtual Reality applications with the Unity game engine. We learned how to create virtual and augmented reality applications through Unity.

Each of us had to create a functional application in a medium of our choice. To get there, we worked through several small tasks creating augmented and virtual reality projects.


After going through different ideas, I decided that I wanted to explore the world of data visualization, especially the visualization of data in our reality. How can we move from data visualized in two dimensions into our three-dimensional world? Can it be better? How could the perception change? Or what would it be like to walk around complex data sets?

My initial idea was to „create an interactive visualization of a predetermined data set and bring it into Mixed Reality. The data should have an interesting visualization, as well as tell a story to the user.“

I had questions like: What are the things we have to look out for? What is possible to do? How can a data set be brought into context? How far along is the technology?


The reason I decided on this topic is that some companies are not only researching what can be done with Mixed Reality and Augmented Reality, but are already experimenting with bringing data into our world through these technologies. Data visualization is a complex topic in itself, whether it is visualized in 2D or 3D – but always within the two dimensions of our screens.

As you can see in the pictures below, these are the few approaches I found through my research. In the iPhone picture from Sebastian Sadowski, we see that the scatter plot is rather messy: everything is agglomerated at one point without any visual structure. In the second picture – a project from IBM – we see that instead of trying to use all the space of reality, like in Sadowski's image, they created a sort of floating interface that, even though it is projected into our real world, has no real interaction or interconnection with it.



Since I wanted to get into the topic and see for myself how difficult it is to bring a dataset into AR or MR, visualize it, and place it in reality the way I wanted, I first had to find a data set to visualize for this project. I decided to work with a dataset of the top 100 songs on Spotify in 2018; one can also download one's own library through the Spotify developer program.

Here is the link where the dataset can be found:

Screenshot 2019-07-03 at 22.13.24.png

The data set is divided into the following categories: Danceability, Energy, Tempo (BPM), Key, Valence / Positiveness, Loudness (dB), Acousticness, Mode (Major or Minor), Speechiness, Instrumentalness, Liveness and Duration.

Most values were given between 0 and 1. (The following descriptions of the categories were copied from the dataset.)

Danceability: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity.  A value of 0.0 is least danceable and 1.0 is most danceable.

Energy: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.

Tempo (BPM): The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.

Key: The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on.
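The pitch-class mapping can be written out as a small lookup; here is a sketch in Python (the list follows standard pitch-class notation, the function name is my own):

```python
# Standard pitch-class names, indexed 0-11 as in the dataset's Key column.
PITCH_CLASSES = ["C", "C#/Db", "D", "D#/Eb", "E", "F",
                 "F#/Gb", "G", "G#/Ab", "A", "A#/Bb", "B"]

def key_name(key: int) -> str:
    """Translate the dataset's integer Key value into a pitch-class name."""
    return PITCH_CLASSES[key % 12]
```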

Valence / Positiveness: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).

Loudness (dB): The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.

Acousticness: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.

Mode (Major or Minor): Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.

Speechiness: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
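The three speechiness bands described above translate directly into a small classifier; a sketch in Python (the function name is mine, the thresholds come from the dataset description):

```python
def speechiness_band(value: float) -> str:
    """Classify a track by the speechiness thresholds from the dataset docs."""
    if value > 0.66:
        return "spoken word"       # probably made entirely of spoken words
    if value >= 0.33:
        return "music and speech"  # e.g. rap, sections or layers of speech
    return "music"                 # most likely non-speech-like tracks
```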

Instrumentalness: Predicts whether a track contains no vocals. „Ooh“ and „aah“ sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly „vocal“. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.

Liveness: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.

Duration: The duration of the track in milliseconds.
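To give an idea of what working with this data looks like before it reaches Houdini, here is a minimal sketch of loading such a CSV with Python's standard library (the exact column names are assumptions based on the categories above; the real dataset may differ):

```python
import csv

# Columns assumed to hold numeric values (names are illustrative).
NUMERIC = {"danceability", "energy", "tempo", "key", "valence", "loudness",
           "acousticness", "mode", "speechiness", "instrumentalness",
           "liveness", "duration_ms"}

def load_songs(path):
    """Read the Spotify CSV into a list of dicts, converting numeric fields."""
    with open(path, newline="", encoding="utf-8") as f:
        return [{k: float(v) if k in NUMERIC else v for k, v in row.items()}
                for row in csv.DictReader(f)]
```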


Knowing my goal and which data set I could use, I started looking into the technology and into which of these values I could visualize.

With that in mind, I decided I could visualize: Danceability, Energy, Tempo (BPM), Valence / Positiveness, Loudness (dB), Acousticness, Mode (Major or Minor) and Speechiness.
I left aside Instrumentalness, which sounded rather similar to Acousticness and Speechiness, as well as Liveness and Duration, whose values weren't rich enough in their complexity.

With my values in mind, and since at the time I was also learning how to use Houdini, I soon realized that I could mix the two technologies and create generative data points through a pipeline connecting Houdini with Unity. My work was then divided into creating objects in Houdini that could represent the parameters I chose, and bringing them into Unity to create an Augmented Reality application.

In short, I wanted to:
create real-time, generative, interactive data for augmented reality
with the help of Houdini and Unity.


As I mentioned before, during this course I was also taking a course on working with Houdini. In one of the classes, we were taught how to import csv data into Houdini and distribute the data across the different axes of the three-dimensional environment.

I therefore experimented with different ways of creating an object, and started looking into how to bring my data set into the software.

Screenshot 2019-06-13 at 16.33.21.png
Screenshot 2019-05-28 at 13.07.05.png

After having visualized the data and changed the parameters along the axes in Houdini, I started to wonder how I could connect these two programs. I did my research and found a pipeline called Houdini Engine, which should allow me to work in real time. The pipeline works the following way:

- Create an object in Houdini
- Choose the parameters that you would like to access quickly in the Digital Asset menu
- Export it to Unity through the pipeline as a Digital Asset


Before starting my project, we all did the same task, which inspired my main project. We created some boxes, uploaded sounds to them, and then built a small application that we uploaded onto our mobile phones. The objects were anchored with a tracker that we could individually define, and throughout the process we used Vuforia to enable the development of this Augmented Reality application.

I realized this could be useful for my project: creating the data points and making them interactive, so that when a user touches the screen, and thereby a data point, the song of the chosen track plays.

Screenshot 2019-08-23 at 20.18.18.png

As I mentioned before, in my Houdini course we learned how to import and visualize a csv document, but I was also curious to see whether the same was possible with Unity, and to what extent it would be better, different, or beneficial.

For that, I followed this tutorial, which explained how to create a scatter plot from a csv in Unity:

Screenshot 2019-07-05 at 01.15.42.png

After seeing that uploading the csv into Unity was rather more difficult, and that changing the geometry points of the objects in Unity would require broad coding knowledge, I had to give up on that idea and explore what would be possible with the plug-in connecting Unity and Houdini.

Screenshot 2019-07-04 at 07.01.40.png



During my first round of work, I ran into some difficulties:

- Compile problems: I could not compile the pipeline from Unity onto my mobile phone. The pipeline was intended to create multiple objects that could also be accessed through code, and the solution that had been created to manipulate Houdini Engine is obsolete.

- Exporting individual objects from Houdini: Exporting normal objects as obj from Houdini produced one object with 100 points… but I needed 100 objects, so that I could give each one a song and make each one interactive.

- Uncertainty about my software: I was not sure how to divide the different parameters of the data set across the axes of the space, nor how to modify my objects so they would look different according to the given parameters.

- AR ToolKit unavailable: At the time, I wanted to create a Mixed Reality project. Unfortunately, the kit to create an MR app for Apple wasn't out yet, since iOS 13 was only launching in September 2019, and the older version of the AR ToolKit had stopped working.

- Technology is not that far yet: I realized that my wish to create real-time, generative, interactive data in augmented reality was rather difficult for various reasons:

- there is no pipeline for generative design in real time, i.e. for modifying objects live in Augmented Reality

- rendering in real time is already difficult in game engines on computers, and even more so in AR or MR

- I was working with 100 data points in total, each of which had to be individually designed, placed in space and made interactive

I had to find an easier solution.

Screenshot 2019-07-04 at 07.02.04.png
Screenshot 2019-07-04 at 06.36.50.png


With everything learned so far and the problems I had encountered along the way, I had to come to a point and deliver my final project. I decided to experiment with the parameters and their visualization in the space and on the objects, to keep working with Vuforia and the tracker, and to make at least 50 of the data points in Unity interactive for the user.

Here are my solutions to each of the problems, and how I created my final visualization in both programs to end up with a prototype of an interactive data visualization of the top 100 songs listened to on Spotify in 2018.


As mentioned before, my goal was to divide the parameters Danceability, Energy, Tempo (BPM), Valence / Positiveness, Loudness (dB), Acousticness, Mode (Major or Minor) and Speechiness not only across the space but also into the visualization of the objects themselves.

I decided to divide Danceability, Energy and Tempo across the three-dimensional space, and Valence, Loudness, Acousticness and Mode across the axes of each object, and finally to visualize Speechiness as a deformation factor: the more words a song has, the more deformed its object is. I will describe each value in more detail below.

This decision was made after numerous experiments.

To achieve my goal, the process can be broken down into the following points:

1. I first imported the csv through a node called „tableimport“. Houdini has a modular system, which makes it quite easy to go back and forth and modify any aspect of the object.

Screenshot 2019-08-25 at 19.30.31.png

2. After importing the table and defining which type of value each column of the csv holds, I proceeded to map Danceability, Energy and Tempo onto x, y and z.

To do this, I used VEX expressions, the internal language of Houdini, which allows interacting with the software through code. (One can use Python as well.) To divide the values, I used a fit function, which tells the program: „Take the minimum value and place it at the minimum point, and take the maximum value and place it at the maximum point.“

For example, on the X axis I positioned „danceability“: my minimum value is 0.1, which goes to X = 0, and my maximum value is 1, which is positioned at X = 1000. Everything in between falls proportionally in the middle. Having 0 and 1 doesn't mean that danceability had to be measured between 0 and 1; it could have been measured in other numbers. The fit simply distributes the minimum and maximum values across the given distance.

Dividing my parameters this way into X, Y and Z meant that:

- The more danceable a song is, the farther away from the origin it sits on X.

- The more energetic a song is, the higher up it sits, i.e. the farther away from 0 on Y.

- Regarding the tempo, the faster a song is, the farther away from 0 it sits on Z, and the slower the song, the closer it sits to 0 on Z.

Screenshot 2019-08-26 at 07.37.46.png

3. After dividing the space into the parameters, I had to visualize the data points and therefore the songs. This was done by adding a scatter (plane) node, dividing it by the number of songs, and then copying each scattered point onto the form it should have, in this case a sphere.

Screenshot 2019-08-25 at 21.54.55.png
Screenshot 2019-08-25 at 21.45.13.png

4. With the 100 songs positioned in space, I used the same fit VEX expression that distributed the songs' parameters across the space to map further parameters onto the axes of each object: how positive or negative they are (Valence), how loud, and how acoustic.

This was done the following way:

The figures themselves also have different parameters that change their length (x), height (y) and width (z):

x = Valence / Positiveness. The more positive (happy, cheerful, etc.) a song is, the more stretched it will be in the x-direction.

y = Loudness. Measured in dB, the louder a song is, the more stretched it will be in the y-direction.

z = Acousticness. If a song is predominantly electronic, it will be very wide in the z-direction, while if it is predominantly acoustic, it will be thin.

Screenshot 2019-08-25 at 22.39.21.png
Screenshot 2019-08-25 at 21.50.37.png

5. With this done, I had nearly all of my chosen parameters assigned to axes in space or to the objects themselves, but two were still left: Mode (whether a song was written in Major or Minor) and Speechiness.

Speechiness was visualized through noise, via a VOP (VEX Operator) in which the parameter was defined. The more text/speechiness a song has, the more texture the object has: more text = more spikes.

Mode was visualized through the size of each object, depending on the mode in which the song was written: Major = big, Minor = small.

The size of the objects was handled the same way as the rest of the parameters: through a fit function in the scaling parameter of the node that combines the scatter with the sphere.
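Taken together, the per-object mapping from steps 4 and 5 (minus the noise deformation) could be sketched like this; the concrete value ranges are illustrative assumptions, not the exact numbers from my Houdini network:

```python
def fit(value, old_min, old_max, new_min, new_max):
    """Linear remap, as in Houdini's VEX fit()."""
    t = (value - old_min) / (old_max - old_min)
    return new_min + t * (new_max - new_min)

def object_scale(song):
    """Map Valence, Loudness, Acousticness and Mode onto an object's
    x/y/z stretch and overall size (ranges are illustrative)."""
    sx = fit(song["valence"], 0.0, 1.0, 0.5, 2.0)       # positive = wider in x
    sy = fit(song["loudness"], -60.0, 0.0, 0.5, 2.0)    # louder = taller in y
    sz = fit(song["acousticness"], 0.0, 1.0, 2.0, 0.5)  # acoustic = thin in z
    size = 1.5 if song["mode"] == 1 else 0.75           # major = big, minor = small
    return (sx * size, sy * size, sz * size)
```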

Screenshot 2019-08-25 at 22.39.46.png
Screenshot 2019-08-26 at 07.46.36.png

6. Finally, as I mentioned in my problems, I was not sure how to export each object so as to end up with 100 objects, one representing each song.

The process was the following:

- First, I reduced the polygons so that the objs I would get at the end wouldn't be too rich in polygons and therefore too heavy for Unity.

- Second, I created a delete node in which I specified to delete everything that was not selected, and across the timeline exported the points through a file node as objs, meaning each frame yielded one object.

Screenshot 2019-08-25 at 22.51.44.png

7. Finally, I decided that an animation would be a good way to show how all these parameters changed. For that I also animated a gradient, because color carries a lot of information as well. Unfortunately, as I found in my experiments, I could not export objs with color from Houdini the way I wanted, meaning all the work done with color could only be shown in video, or as one obj that contained all 100 songs as a single object.

For the color, I created another VOP in which I defined which parameter to show, and then created a ramp node in the geometry node for its color. I decided which color would be 0, in my case blue, and 1 would be yellow. Values in between graduate through purple, pink and green.
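At its core, the ramp is a per-channel linear blend between two colors. A simplified two-stop sketch in Python (my actual ramp had intermediate stops graduating through purple, pink and green):

```python
def ramp(t, low=(0.0, 0.0, 1.0), high=(1.0, 1.0, 0.0)):
    """Blend from blue (t = 0) to yellow (t = 1), channel by channel."""
    t = max(0.0, min(1.0, t))  # clamp, like a ramp's end stops
    return tuple(l + t * (h - l) for l, h in zip(low, high))
```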

Screenshot 2019-08-26 at 07.53.10.png
Screenshot 2019-08-26 at 07.56.25.png

To create the animation, I used the blend node:

- To animate the movement of x, y and z, I copied the node containing the VEX expressions three times and, in each copy, set the maximum position to 0 or 1000 respectively, so that only one of my values was at its maximum. Then, through the blend node, I created three blending spaces in which I switched between 0 and 1 and animated the keyframes in the animation tool.

Screenshot 2019-08-26 at 08.04.44.png
Screenshot 2019-08-26 at 08.01.35.png

- The same was done for the color; the difference was that instead of changing the VEX expression node, I changed the parameter referenced by the ramp node, giving it danceability, energy, tempo, etc. Then, as before, I animated each blend with keyframes.

Screenshot 2019-07-02 at 11.19.11.png
Screenshot 2019-08-26 at 08.08.24.png

This is how my entire visualization node system looked at the end:

Screenshot 2019-08-26 at 07.49.37.png

8. Besides that, I also visualized the name of each song and artist in the space. I thought this might be useful for exploring the data in Augmented Reality.

To create the words as polygons, I used nearly the same node composition but replaced the sphere with a text node. Then I exported everything as one obj to use in Unity.

Screenshot 2019-08-26 at 22.13.08.png



After exporting all my objects (data spheres, song and artist names) and finishing the animation came the last, still not easiest, step of the process: bringing everything together in Unity and making it interactive as an augmented reality app.

As I mentioned before, I could not use the new iOS, meaning I had to keep working with Vuforia and its tracking system. I had solved the problem of the individual objects which, to my luck, had also kept their coordinates in space. Meaning each object came out of Houdini already encoding how danceable, energetic, positive or negative and loud its song is, as well as its level of speechiness, its tempo and its mode.

However, even though a big part of the work was already done, I still had to write the code, add the songs to each object, and create an interface that would explain to the user what they were looking at.

The process of bringing my Spotify dataset from Houdini to Unity can be divided like this:

1. For simplicity I will explain the code first; however, writing the code was part of each step, since it required retouching, adding and mixing information.

My code was a modification of the code we had used for the task with the sound cubes.

The code was divided into:

A. When an object is selected, play its song

B. When an object is selected, show the name of the song

C. Show and hide the interface on click

Below I show the code and briefly explain what each part contains.

At the beginning of my code I derived from the class MonoBehaviour, from which every Unity script derives, and named the script SongDisplay.

After that, I created a series of variables that will allow me to:

- show the name of the song 

- have a background in which the song will appear 

- create the container of the audio clips I will later upload

- and call a class that allows me to play the music in the 3D space I created.

**The great thing about Unity is that it comes with a very large scripting library that makes writing code easier, since there are already many prepared scripts that can be reused. Here the link:

***In my script, for example, AudioClip and AudioSource come from this library.

Screenshot 2019-08-27 at 07.23.20.png

After the creation of my variables comes the part of the code that is called at start and every time there is new input.

This part of the code specifies that if it finds an input touching the screen, it creates a ray (a mathematical line) toward the position being touched. If this ray hits an object, the song plays and the background showing the name of artist and song is set active. If the ray hits somewhere else, no song plays and no text or background is shown.
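Stripped of the Unity specifics (touch input, Ray, Physics.Raycast), the decision logic amounts to the following sketch, where `raycast`, `songs` and `ui` are stand-ins for the engine pieces, not actual Unity API:

```python
def handle_touch(touch_position, raycast, songs, ui):
    """Play the touched object's song and show its label, or hide the label
    if the touch hit empty space. `raycast` stands in for the engine's
    hit test and returns the hit object's name or None; `songs` maps
    object names to audio clips; `ui` shows/hides the label background."""
    hit_name = raycast(touch_position)
    if hit_name is not None and hit_name in songs:
        ui.show(hit_name)        # set the name background active
        return songs[hit_name]   # the clip that should be played
    ui.hide()                    # no hit: no song, no text, no background
    return None
```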

Screenshot 2019-08-27 at 07.52.54.png

At the end of the code, I created another function to keep my code simple and clean. This function connects the two functions explained before and adds a new factor: it identifies the object in the 3D space, which I previously named after the artist and song – below you see it as „Tyga-Taste“ –, plays the song stored in the container I created in Unity, and shows the name of the object on top of the background called earlier in the code.

Screenshot 2019-08-27 at 07.53.20.png

Finally, there is a separate script whose function is to turn the interface containing the explanation of the project on and off. With this code the user can control through a button whether to see or hide the information.

Screenshot 2019-08-26 at 22.40.17.png

2. After my code was written, I created a tracker to which all the data would be anchored, so that when the application on my phone saw the tracker through the camera, it would be identified and the visualization would run.

The tracker was made on the following website:


3. Following that, I exported all the objs from Houdini into Unity and scaled them down to the size of the tracker, which could also be understood as the plane on which the data set would be shown.

Each object needed a collider for the ray in the code to „hit“ it. Unfortunately, when selecting all the objects and adding the box collider, for some reason the collider would not appear around the object but somewhere else. This meant I had to do it manually for each object: resizing, scaling and positioning the collider.

I also named each object individually with the name of the song and artist. This name has to match the one introduced in the function I created to identify the object (void playSound).

This is why I decided to make a third of the objects visible instead of all 100, since I would have lost too much time doing everything manually, and the whole point was to have the data in AR be interactive to some extent.

Screenshot 2019-06-26 at 19.06.45.png
Screenshot 2019-06-27 at 12.39.10.png

4. The main script was not attached to the tracker that contained the data, but to a null object named Game Controller. This null object, or empty, contained the code that controlled the entire application.

This is where the function containing the names of the songs and their numbers was exposed, for me to assign the songs.

Again, since I did not know a more efficient way to write the code, I had to download each song manually, cut it to a length that I thought would not cause trouble when compiling or playing the application on the phone, and add each song manually into the container created at the beginning of the code.

Screenshot 2019-06-24 at 11.23.39.png

5. At this point I had my objects in space and the songs that play when a data point is selected. In point one I explained the different parts of the code, one of them being „when an object is selected, show the name of the song“. My initial wish was to have all the song and artist names in the space – that's the reason I created the text objects in Houdini – but I soon realised that, floating around the space all the time, they would create more noise than information.

**Fun fact: Even though I did not use the obj of the song names around the space, it did help me identify and name each data object in the space.

I unfortunately did not find a way to place the UI in the 3D space on top of each object, since that posed a problem: the UI would always need to know where the camera is and rotate with it.

I didn't know how to code this, so my solution was to display the space with its axes, which helps to better understand the space containing the information, and to show the UI with the name of song and artist at its cusp.

**Before this final visualization, I created other versions with color and also tested them on my phone. Since color didn't really add another layer of information, I decided to keep everything in Spotify's corporate green.

Screenshot 2019-06-26 at 19.07.06.png
Screenshot 2019-06-26 at 19.15.25.png

6. With the visualization of my data done came the last point: showing the information.

For this, the code of the UI was not part of the Game Controller but of the ARCamera, which is in charge of „switching on“ and „off“ what can and cannot be seen.

I created a button that triggers the display of the information, and a canvas to which I added the text.

Screenshot 2019-06-27 at 16.22.01.png

7. Finally I exported everything and uploaded it to my phone through Xcode. With this, my prototype for an interactive data visualization of Spotify's top songs was done.

Screenshot 2019-07-04 at 23.56.05.png
Screenshot 2019-07-04 at 23.56.21.png



In 2018, people liked to listen to rap or pop songs by Drake, Post Malone or XXXTENTACION. Their tempo varies and they are similarly loud. The majority of the songs are non-acoustic, written in major, don't have much text, are danceable and rather negative.

**Underneath: pictures of the most-heard artists, with the different parameters of the data visualized through color. Blue is low, yellow is high.


Screenshot 2019-08-25 at 23.02.09.png


Overall, combining the Extended Realities course with Datascapes with Houdini was super interesting and very insightful, and I learned a lot along the way. I am very thankful to both teachers, Tank Thunderbird and Julian Braun, for allowing me this opportunity.

Through the process of creating this prototype, a lot of things were out of my control, and I learned not to stress myself, given that everything that has to do with software, plug-ins and pipelines depends on external circumstances that one cannot control.

Houdini, for its part, is not an easy software to use. In retrospect, I wish I had worked more on the visual part and gone even crazier with my visualization. Unity, for its part, depends on other plug-ins, pipelines and updates, which in my case made some things impossible; but overall Unity was easier to use than Houdini, and I did manage to create something interactive.

Regarding the visualization of csv data through both programs, it is indeed easier for a designer to do it through Houdini, but in the future it might be worth exploring how to modify the polygons of the objects with the data through code in Unity.

I am thankful that I constantly tested in each part of my process. This allowed me to resolve problems faster and really see how everything looked and felt.

With this said, I believe these technologies and data visualization through this medium have potential, and I wish to keep working on it: creating more crazy visualizations and pushing the boundaries of integrating generative, interactive design into Extended Reality technologies.



Type of project

Coursework in the second stage of studies


Photo: Tank Thunderbird

Associated workspace

Extended Realities


Summer semester 2019