Summer Night Smart City anyone? Ethics, Psychology and Artificial Intelligence in future city planning

ABBA’s famous song Summer Night City was released 40 years ago as a tribute to a happy and inspirational Stockholm. In 2017, the Stockholm City Council adopted a strategy, City Vision 2040, developed together with its citizens, for making Stockholm the smartest city in the world. Would it turn Stockholm into a Summer Night Smart City?

Good songs and lively cities make us feel joyful, as they create a warm and safe space that encourages connection and collaboration. In music, one can experiment with sound arrangement by blending natural and artificial sounds using different instruments. In city planning, it is about "space arrangement": one needs to anticipate the future uses of physical space, taking into account the changing economic, environmental, demographic, cultural and transportation needs of citizens. The word citizen can even be read as the zen of cityness, an urban feeling of connectedness.

Lately, the topic of Artificial Intelligence (AI) has been making headlines everywhere. Despite the hype, the notion of AI triggers feelings of ambivalence: we are fascinated by the future benefits AI could bring for humans and society, yet uneasy about potential challenges related to its supposedly unprecedented capabilities. In the area of future city planning and urban development, we need to safeguard the quality of human lives, including human rights, citizens' safety and security, the city's attractiveness, fairness and sustainability. To this end, we need to consider psychological, societal and ethical questions alongside the technical issues associated with AI's rapid development and utilization. What if technical development accelerates faster than the moral and psychological understanding related to AI applications? Moreover, people are not pixels: recent urban psychology research is concerned with cities being seen "mechanistically, as inanimate clumps of buildings and technology, which misses their essential human nature".

Human experience and behavior are always contextual. The local rationality principle posits that we make decisions based on what makes sense to us given the goals, local conditions and group norms, that is, the beliefs about the proper way of acting in different situations. We are part of the context that affects how we act. How can we ensure that we, as humans, can deal with unintended consequences when AI collects and connects contextual cues, makes decisions and performs a range of activities? How about "sensemaking" for robots? Attachment theory refers to the dynamics of relationships and bonding: concepts such as 'place identity' and 'place attachment' suggest that the place we live in has a profound impact on our sense of self, belonging, purpose and meaning in life. Understanding how people interact with the environment and infrastructure in a city shapes meaningful design and city planning. The future urban landscape needs to accommodate diverse and multicultural needs. Social identity theory indicates that ethnocentrism results when people categorize themselves into emotionally significant groups. In organization science, this can be related to the notion of faultlines, introduced two decades ago by Lau and Murnighan (1998) as hypothetical dividing lines based on different attributes, which can potentially trigger "us-versus-them" relationship dynamics. A typical big city abounds with a multitude of differences in views, cultures or religions. How can AI be used to "melt" these faultlines, mitigate inequalities and build trust and sustainability? How can we create cities with a healthy heartbeat that we all love to live in?

“AI is just an extension of our existing culture”

One of the great promises of AI is to eliminate human weaknesses, such as cognitive biases in decision-making. The general assumption is that AI is logical and objectively rational. However, a new study that used a psychological tool, the Implicit Association Test, shows that AI can be biased because it learns from humans: it acquires the cultural biases embedded in patterns of wording and effectively adopts cultural stereotypes. "AI is just an extension of our existing culture", says Joanna Bryson, one of the authors of the study and a computer scientist at the University of Bath in the UK and Princeton University. A recent MIT study also found gender and skin-type bias in commercial AI systems. How will a machine decide what to do when facing ethical dilemmas? There is a need to encourage an active and genuine dialogue between technology experts and social scientists on how intelligent machines are impacting society. Now is the time to consider the "design, ethical, and policy challenges that AI technologies raise", says Barbara Grosz, Professor at the Harvard John A. Paulson School of Engineering and Applied Sciences. Prof. Grosz chairs AI100, the One Hundred Year Study on Artificial Intelligence, which aims to anticipate how the effects of AI will flow into every aspect of our lives.
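
The mechanism behind such findings can be illustrated with a small sketch of a word-embedding association test in the spirit of the study: target words are compared against sets of pleasant and unpleasant attribute words using cosine similarity. This is a minimal illustration only; the embedding file name and the word lists are placeholder assumptions, not the data or code used in the study.

```python
# Minimal sketch of a word-embedding association test in the spirit of the
# cited study. The embedding file name and the word lists are placeholder
# assumptions for illustration, not the data or code used in the study.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.txt")  # hypothetical file

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attributes_a, attributes_b):
    # Mean similarity to attribute set A minus mean similarity to attribute set B.
    v = vectors[word]
    return (np.mean([cosine(v, vectors[a]) for a in attributes_a])
            - np.mean([cosine(v, vectors[b]) for b in attributes_b]))

pleasant = ["joy", "love", "peace"]
unpleasant = ["agony", "terror", "hatred"]
for word in ["flower", "insect"]:
    print(word, round(association(word, pleasant, unpleasant), 3))
```

If the embeddings have absorbed human associations from text, words such as "flower" tend to score closer to the pleasant set than words such as "insect", and the same pattern appears for socially loaded word pairs.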

ABBA was an awesome and adorable song-writing and singing "hit machine" with a lasting effect on generations. These days, ABBA is again in the spotlight in Finland for a good reason: the musical Mamma Mia! will debut in Helsinki in May 2018, performed in Finnish for the first time. Thrilling songs sound in thriving cities.



Nadezhda Gotcheva
Senior Scientist
nadezhda.gotcheva(a)vtt.fi

Open technologies will democratize AI

The ongoing digitalization and AI-driven transformation of the global economy, national economies and corporations is under way and seems to have no end in sight. This change represents a societal disruption with wide-ranging impacts. Continuous change, development and experimentation are the new normal. To stay competitive, organizations need to continuously explore opportunities to exploit data and AI technologies, both to improve existing business processes and offerings and to find new ones.

A recent PwC report estimates that Artificial Intelligence (AI) could make a potential contribution of USD 15.7 trillion to the global economy by 2030[i]. The same report identifies nearly 300 use cases for AI spanning business and society. Finland's goal to become a leader in applying AI reflects this ongoing digitalization and societal change[ii].

AI and the information, communication and automation technologies used in its realization are developing at a breathtaking pace. Development is so fast that education systems struggle to meet rapidly changing skill needs when training the workforce for the labor market. Various online courses and mini-degrees have grown in popularity in response to these rapid skill development needs[iii].

The availability of open AI technologies and the related pool of experts have been growing steadily over the past few years. In 2017, the GitHub community of open source software developers reached 24 million developers working across 25 million repositories of open source code[iv]. Open AI technologies have become a serious alternative to commercial AI technology offerings.

For example, Google has opened the source code of the machine learning platform behind its own production services, which has created a significant developer and user community around it. In 2017, TensorFlow and TensorFlow Models were two of the ten most active code repositories on GitHub. Several other AI technologies have also become available under open source licenses. Just under half of the 100 largest companies in the United States (by revenue) use GitHub Enterprise to build software. Furthermore, in relation to the AI skills shortage, only about 5,000 teachers and 500,000 students worldwide used GitHub actively in 2017.
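
To illustrate why such openly available frameworks matter in practice, the sketch below trains a tiny classifier with open source TensorFlow in a handful of lines. The data is random placeholder input, not a real use case; it is only meant to show how low the entry barrier has become.

```python
# Minimal sketch: training a small neural network classifier with the open
# source TensorFlow framework. The data is random placeholder input, used
# only to show how little code an openly available framework requires.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")   # 1000 samples, 20 features
y = (x.sum(axis=1) > 10.0).astype("int32")       # toy binary label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print("Training accuracy:", round(model.evaluate(x, y, verbose=0)[1], 3))
```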

Development of new services requires strong AI technology expertise

VTT and IBM Research – Almaden are engaged in research exchange cooperation in Silicon Valley. The goal is to study the architecture, ecosystem and future development of open AI technologies from the viewpoint of AI systems development and engineering. Preliminary results of the work are published on a joint blog (http://opentechai.blog), and the topic is discussed at the international OpenTech AI Workshop in Helsinki.

The advantages of open AI technologies include a rapid pace of development. Research in the field of AI produces new algorithms and machine learning models, and for the sake of reproducibility these are often first implemented and made available as open technologies. In addition to open source code, a lot is also happening around open datasets, machine learning models, benchmarks and leaderboards. The ecosystem around open AI technologies has emerged and is evolving rapidly. This evolution is not merely worth following from the sidelines; it calls for active participation in the research, development and exploitation of open AI technologies. Clarifying the role and importance of open AI technology for one's own organization is wise preparation for the future.

The evolution of open AI technologies is a development of the past few years and a continuation of the open source movement in software products, which started earlier. Open development of AI technologies is democratizing the opportunities to exploit AI: it enables building the needed skills, sharing code and applying the technology independently of individual vendors in an open ecosystem. In the field of AI, too, value creation and commercial competition are shifting from software products to applications and related services. What is crucial here is strong and versatile expertise in AI technologies and the capability to apply new and rapidly evolving technology together with customers.


Daniel Pakkala
Principal Scientist, Data Driven Solutions, VTT

Jim Spohrer
Director, Cognitive Open Technologies, IBM

For more information:

http://opentechai.blog

https://developer.ibm.com/opentech/2018/01/29/helsinki-march-2018-opentech-ai-workshop/

[i]  PWC (2017) Artificial Intelligence Study. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
[ii] VTT (2018) Finland AI Strategy. http://www.vttresearch.com/Impulse/Pages/Finland-seeking-top-spot-in-application-of-artificial-intelligence-AI.aspx
[iii] For example, for a freely available, easy to access online set of courses see http://cognitiveclass.ai
[iv] GitHub (2017) State of the Octoverse. https://octoverse.github.com/

 


Will artificial intelligence remain under human control?

How can one communicate fluently with artificial intelligence? Can one cooperate with artificial intelligence?

Existing artificial intelligence (AI) systems based on machine learning are often independent actors that inform people of their conclusions but otherwise interact with people on a very limited scale. AI is increasingly being introduced not only in services accessible via the internet, but also in mobile machines, such as autonomous cars and robots. We should consider how to ensure that AI always remains under human control, and how humans can, and should be able to, interact with AI.

Verbal and non-verbal communication

In technology trend analyses, the interactive properties of AI have been identified as the next major step in its development. Dialogical interaction does not require the user to look up and learn commands; instead, the correct function is negotiated through free dialogue with the machine. Interaction can be supplemented with non-verbal communication so that the machine identifies and reacts to the person's emotional state, such as the person being confused. A machine can learn to identify individuals and adjust its operation according to which matters the person is and is not familiar with, and how he or she prefers to operate. Personal virtual assistants, such as Apple's Siri, strive to establish a relationship with their owner and learn the owner's preferences so that, with time, they can predict the person's needs and offer assistance even before the person asks for it.

On the internet, you nowadays often encounter chatbots. They are already relatively clever, and when dealing with them you may not notice at first that you are not talking to a real human being. A chatbot's ability to converse is based on the fact that it knows the limited service area within which it operates very well. It has learned to predict what kinds of questions people may ask. Every now and then a chatbot may come across as a little rude. This probably derives from the fact that chatbots are programmed by people, who transfer their own manners to the bot.
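
As a minimal sketch of why this works, a chatbot serving a narrow domain can get surprisingly far by matching the user's message against a handful of anticipated intents and handing everything else over to a human. The intents, phrasings and threshold below are invented for illustration, not taken from any real service.

```python
# Minimal sketch of a chatbot for a narrow service area: it matches the user's
# message against a few anticipated intents and hands anything else to a human.
# The intents and example phrasings are invented for illustration.
from difflib import SequenceMatcher

INTENTS = {
    "opening_hours": (["when are you open", "opening hours", "what time do you close"],
                      "We are open Monday to Friday, 8-18."),
    "reset_password": (["forgot my password", "reset my password", "cannot log in"],
                       "You can reset your password on the login page."),
}

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reply(message, threshold=0.6):
    best_answer, best_score = None, 0.0
    for examples, answer in INTENTS.values():
        score = max(similarity(message, example) for example in examples)
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= threshold:
        return best_answer
    return "I am not sure I understood. Let me connect you to a human."

print(reply("What are your opening hours?"))
print(reply("My invoice looks wrong"))
```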

Interest in AI solutions where humans and AI work in collaboration with each other is increasing. Collaborative human power can be used, for example, for collecting data or interpreting images in solutions where a large group of people and AI form a collectively functioning entity. This kind of collective intelligence has been used for purposes such as the digitisation of old texts. The human eye is incomparable at recognising words, even when they are written in unfamiliar lettering. When AI carries out the easy text recognition tasks and lets people deal with the unclear cases, the work advances quickly with such collective power.
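
A minimal sketch of this division of labour is shown below: the machine keeps the words it recognises with high confidence and routes the uncertain ones to human volunteers. The recognize function here is a stand-in rather than a call to any real OCR library, and the confidence threshold is an illustrative assumption.

```python
# Sketch of human-AI collaboration in digitising old texts: the machine keeps
# the words it recognises with high confidence and routes uncertain ones to
# people. recognize() is a stand-in, not a call to a real OCR library.

def recognize(segment):
    # Placeholder: a real implementation would call an OCR engine here.
    return segment["guess"], segment["confidence"]

def transcribe(segments, threshold=0.9):
    automatic, for_humans = [], []
    for segment in segments:
        text, confidence = recognize(segment)
        if confidence >= threshold:
            automatic.append(text)        # the machine handles the easy cases
        else:
            for_humans.append(segment)    # unclear cases go to human volunteers
    return automatic, for_humans

segments = [{"guess": "Helsinki", "confidence": 0.98},
            {"guess": "Stockho1m", "confidence": 0.55}]
done, queued = transcribe(segments)
print(done, "/", len(queued), "segment(s) sent to human volunteers")
```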

Fluent interaction requires learning and participation

Fluent interaction between humans and AI still requires a lot of development in many areas. In the future, we will see an increasing number of work teams consisting of humans and robots. A robot can assist humans in many kinds of maintenance and service tasks. Fluent interaction is based on AI, with the help of which the robot interprets its environment and the humans around it. Recognising each other's intentions plays a key role: a human must be able to anticipate the robot's actions, and likewise the robot must be able to anticipate human actions. Dialogical interaction solutions are needed in this field as well.

Autonomous cars and other vehicles largely function on their own, but when they encounter a problematic situation, they may easily need human assistance. In such a situation, it is good if the machine has kept the human up to date on what is going on, so that he or she can quickly resolve the situation. Indicating and recognising intentions is also important with a view to bystanders: when pedestrians encounter an autonomous car, how can they be sure that the car has seen them and will stop at the pedestrian crossing to give way? How do you establish eye contact with an autonomous car?

Different smart services at home and in offices strive to fulfil people's wishes and predict their desires. Often such services remain unnoticed, in which case it may remain unclear why the air conditioning is blowing at full blast or why the temperature does not rise. An easy interaction channel is needed so that people can find out why things are going the way they are, and so that they can influence matters.

AI is not infallible: it can make mistakes and it may have faults. Once humans learn to understand the limitations of AI and the way it draws conclusions and functions, interaction between humans and AI will become easier. When people understand the basics of how AI functions, they can put themselves on a level with it, in the same way as people naturally tune in to the level of the person they are talking with. It is important to develop AI solutions in such a manner that the people who will work with AI can participate in designing those solutions.

Read more: VTT and Smart City

Eija Kaasinen
Senior Scientist, VTT
@eijakaasinen 
eija.kaasinen(a)vtt.fi

 

 

How will we manage with artificial intelligence in the future?


What is machine learning? Why does artificial intelligence draw conclusions differently than humans do? How does artificial intelligence become superintelligence?

Early this year, I spent a night at a big hotel in Berlin. When I stepped into my room, it felt quite cool inside. There was a sticker by the door telling me that the hotel had introduced a "Smart climate control" system and that I could adjust the temperature to the desired level through my TV. I turned on the TV and, after various turns, navigated to the climate control page. And there it was: the present temperature was 18 degrees and the target temperature set by the previous guest was 25. I set the target temperature to 22 degrees and went out to have dinner. When I returned to my room, the temperature had climbed to 19 degrees, probably thanks to the PC I had left on in the room. It still felt quite cool, so I called the hotel reception for help. The help soon arrived: a janitor brought me an old-style fan heater. I could not keep the noisy fan on at night, so the temperature dropped back to around 18 degrees. Nevertheless, in the morning I woke up well rested after a good night's sleep. After all, you sleep better in a cool environment. This left me wondering whether the smart climate control was smart enough to understand, better than I did, what the ideal temperature for me was. I would still have appreciated some kind of explanation, because a "smart" system that does what it pleases without giving a human any say left me feeling powerless. The hotel staff had also clearly resigned themselves to the smart climate control and did not even try to fix the system in my room, resorting instead to a good old fan heater. If the system really was smart, would it not also keep people up to date on the decisions it has made and tell them what it is aiming at? And if it cannot function or fulfil people's wishes, would it not also give a reason for this?

From artificial intelligence to superintelligence

Artificial intelligence (AI) has been studied for decades, but it is now experiencing a strong renaissance. Earlier attempts to bring all expert knowledge on one subject into a single machine were defeated by their own impossibility. Today, the prevailing trend is the development of AI based on machine learning, where the idea is that the machine learns little by little while being taught, but also on its own. Machine learning is well suited to analysing large masses of data and to supporting people in data-based decision-making. In medicine, for example, AI allows the examination of different measurement data, and the machine can draw connections between the data. AI can therefore be used for purposes such as forecasting the development of a disease by comparing a patient's data to data on earlier patients. It is typical of machine learning that the result is not exact but a probability-based forecast. That is why a machine cannot give the kind of detailed explanations for its conclusions that a human expert can.
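
The probabilistic nature of such forecasts can be made concrete with a small sketch: a classifier trained on earlier, here purely synthetic, patient data outputs a probability of progression rather than a yes/no answer, and its learned weights offer only a rough hint of an explanation. The features, data and model choice below are illustrative assumptions only.

```python
# Sketch of a probability-based forecast: a model trained on earlier (purely
# synthetic) patient data outputs a probability of disease progression, not a
# certainty. The features and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # e.g. age, a lab value, blood pressure
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[1.2, 0.3, -0.4]])
probability = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of progression: {probability:.2f}")
# The learned weights give only a rough, linear hint of why the forecast is
# what it is, far from the detailed explanation a human expert could give.
print("Feature weights:", model.coef_[0].round(2))
```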

A lot is expected of machine learning not only in medicine but also in companies' service business, where AI can be used for analysing machine data collected from the field and forecasting, for example, the occurrence of faults. In such applications, AI functions independently, analysing data and giving people suggestions about the next necessary maintenance measures and even about their suitable timing, taking financial factors into account.
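
The same idea applied to maintenance can be sketched as a simple expected-cost comparison that turns a forecast fault probability into a maintenance suggestion. The cost figures below are invented for illustration; a real system would of course use the company's own cost models.

```python
# Sketch of turning a forecast fault probability into a maintenance suggestion
# with a simple expected-cost comparison. All cost figures are invented.

def suggest_maintenance(fault_probability, cost_of_failure=20000.0,
                        cost_of_maintenance=1500.0):
    expected_failure_cost = fault_probability * cost_of_failure
    if expected_failure_cost > cost_of_maintenance:
        return (f"schedule maintenance (expected failure cost "
                f"{expected_failure_cost:.0f} EUR exceeds {cost_of_maintenance:.0f} EUR)")
    return "no maintenance needed yet"

for probability in (0.02, 0.15):
    print(f"Fault probability {probability:.0%}: {suggest_maintenance(probability)}")
```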

In addition to these positive effects, futures researchers have also painted some very gloomy scenarios about a future "superintelligence" that would be able, for example, to develop its own intelligence, draw its own conclusions and generate a will of its own, and could thus slip out of the hands of both its designers and its users.

What would be a potential path from the present machine learning-based AI systems to such superintelligence? AI is being introduced not only to services accessible via the internet, but also to mobile machines, such as autonomous cars and robots. Would this be the right time to consider how to steer future development paths so that AI remains under human control for certain?

A clever person solves the problems that a wise person knows to avoid. This old wisdom should be applied to AI as well: if AI represents the cleverness and humans the wisdom, then humans must be secured a role in which they can prevent the problems that AI might cause to itself or to humans. There must be an easy connection between AI and humans, and humans must have the final decision-making power. This will prevent AI from slipping out of human hands even as it learns new things.
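
One concrete way to keep the final decision-making power with humans is an approval gate: the AI may only propose actions, and nothing is executed without explicit confirmation. The sketch below is a minimal illustration of that principle; the action name, sensor reading and threshold are invented.

```python
# Sketch of a human-in-the-loop approval gate: the AI may only propose actions,
# and nothing is executed without explicit human confirmation. The action name,
# sensor reading and threshold are invented for illustration.

def ai_propose(vibration_level):
    # Placeholder for a learned model: propose an action and explain why.
    if vibration_level > 80:
        return "shut_down_line", f"vibration level {vibration_level} exceeds the limit of 80"
    return None, "all readings within the normal range"

def run_with_human_in_the_loop(vibration_level):
    action, reason = ai_propose(vibration_level)
    if action is None:
        print("AI:", reason)
        return
    print(f"AI proposes '{action}' because {reason}.")
    answer = input("Approve? [y/N] ")       # the human keeps the final say
    if answer.strip().lower() == "y":
        print("Executing:", action)
    else:
        print("Proposal rejected; nothing was executed.")

run_with_human_in_the_loop(85)
```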

In the next part of the blog series, I will focus more on the interaction between humans and AI.

Read more: VTT and Smart City

Eija Kaasinen
Senior Scientist, VTT
@eijakaasinen 
eija.kaasinen(a)vtt.fi

 


Business out of data in urban environments

The role of local authorities and cities is undergoing a transformation, and it is becoming more common to regard them as service platforms. One enabler of this development is the shift from closed to open systems, but new modes of operation, such as the "city as a platform" thinking included in the Smart Tampere ecosystem, also contribute to it.

It is possible to collect a lot of electronic data on the behaviour and needs of municipal residents. Using artificial intelligence (AI) or augmented reality (AR) tools, such data can be utilised in decision-making and in the development of new services. With the help of refined data, the future service needs of municipal residents can be predicted, and services can be personalised according to different life situations. When someone is moving house, AI can automatically recommend the best residential area and suitable day care centres with openings, or suggest the most sensible jobs, in accordance with the user's personal interests. Cities know their residents increasingly well, and the data offers huge opportunities for different stakeholders to provide new services.
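
As a minimal sketch of how such a recommendation could work, each residential area can be described by a few attributes derived from city data and scored against the mover's stated preferences. The areas, attributes and weights below are invented for illustration, not real city data.

```python
# Sketch of recommending a residential area to someone who is moving, by
# scoring each area against the person's stated preferences. The areas,
# attributes and weights are invented for illustration.

AREAS = {
    "Area A": {"day_care_openings": 12, "commute_minutes": 25, "rent_per_m2": 11.0},
    "Area B": {"day_care_openings": 3,  "commute_minutes": 10, "rent_per_m2": 17.5},
    "Area C": {"day_care_openings": 8,  "commute_minutes": 20, "rent_per_m2": 12.5},
}

# Positive weight: more is better; negative weight: less is better.
PREFERENCES = {"day_care_openings": 1.0, "commute_minutes": -0.5, "rent_per_m2": -0.8}

def score(area_name):
    return sum(weight * AREAS[area_name][attribute]
               for attribute, weight in PREFERENCES.items())

ranked = sorted(AREAS, key=score, reverse=True)
print("Recommended order:", ranked)
```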

However, enterprises have been slower than expected to seize the opportunities offered by open data. User data is dispersed across various public and private digital sources, and creating major data-based business would require integrating data from several sources. In other words, ground rules and bold initiatives for sharing data between operators are also needed. Creating new data-based business requires examining services from the viewpoint of municipal residents instead of using the data sources as the starting point for service development. Turku, with its 'circular economy of data' project, and Forum Virium Helsinki, with user-oriented open innovation as its mode of operation, are excellent examples of trendsetters.
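
In practice, integrating data from several sources often boils down to joining datasets on a shared key, such as a district identifier, before any new service can be built on top of them. The sketch below uses invented data purely to illustrate that step.

```python
# Sketch of integrating open data from two (invented) sources on a shared
# district identifier, a typical first step before any new service can be
# built on the combined data.
import pandas as pd

population = pd.DataFrame({"district_id": [1, 2, 3],
                           "residents": [12000, 8500, 15300]})
transport = pd.DataFrame({"district_id": [1, 2, 3],
                          "bus_stops": [34, 12, 41]})

combined = population.merge(transport, on="district_id")
combined["stops_per_1000_residents"] = (
    1000 * combined["bus_stops"] / combined["residents"]).round(1)
print(combined)
```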

Use of open data from various sources in applications and services

Open data can be used in different service contexts. Most examples of such applications can be found in financial and taxation services, such as the Budjettipeli budget game, with the help of which you can test different models for sharing the financing burden of welfare services between public communities and private citizens. It is based on the data resources of Statistics Finland, the National Institute for Health and Welfare and the Finnish Centre for Pensions. Many examples can also be found among map applications, such as the online and mobile service Aaltopoiju, which offers boaters and recreational seafarers exact observation and forecast data on weather phenomena such as water level and wave height. Aaltopoiju uses open data produced by the meteorological institutes of Finland, Estonia, Sweden and Germany.

The success factors of a business process based on open data

With a view to making business, it is important that applications based on open data have easy-to-use user and customer interfaces. The integration of data and information systems plays a key role in how usable the data is. Technological solutions must support the usability of the application. In addition, securing the information security of individuals is a prerequisite for creating profitable business out of open data. When collecting and using municipal residents' personal data, the sensitive nature of such data must be taken into account at every stage of the data process.
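
One small but concrete example of taking that sensitivity into account is pseudonymising direct identifiers before the data moves to later processing stages. The sketch below uses a salted hash; note that this alone does not make the data anonymous, and the record contents are invented.

```python
# Sketch of pseudonymising a direct identifier with a salted hash before the
# data moves to later processing stages. Note: this alone does not make the
# data anonymous; it only removes the direct identifier from downstream use.
import hashlib
import secrets

SALT = secrets.token_hex(16)   # kept separate from the pseudonymised data

def pseudonymise(personal_id: str) -> str:
    return hashlib.sha256((SALT + personal_id).encode("utf-8")).hexdigest()[:16]

record = {"personal_id": "010190-123A", "district": "D3", "parking_events": 14}
safe_record = {**record, "personal_id": pseudonymise(record["personal_id"])}
print(safe_record)
```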

Below, as an example, we have listed data initiatives related to parking and traffic, including pedestrian and bicycle traffic, that are being planned, in progress or in their final stages in various cities. In the services of Helsinki Region Transport (HRT), current issues include the start of Länsimetro operations and the related changes, whereas in Tampere the construction of a tramline is reforming the transport structure of the Pirkanmaa area and the business activities related to it. Identifying the critical missing pieces in services from the point of view of those moving around in city areas can serve as a basis for planning new data initiatives. This enables more efficient creation of new, data-based business operations.

[Image: tampere_smartcity]

Customer-oriented and comprehensive service solutions

In urban environments, services utilising open data must be based on customers' needs, not only on the needs of individual data-based services. A lot of data is already available from various sources, but identifying the critical missing data and providing it openly may create new value-creation opportunities. Data accumulated in the various phases of service use and of the business process may create new opportunities once we learn to refine it into usable form. Therefore, the roles required for the analysis and utilisation of data (e.g. technical implementation and final use), and the operators in the overall ecosystem, must be identified to enable value creation for the end user. It is also important to collect feedback on the use of applications in order to develop the services further.


Antti Ruuska
Business Development Manager, VTT
antti.ruuska(a)vtt.fi
Twitter: @antti_ruuska

Salla Paajanen
Research Scientist, VTT
salla.paajanen(a)vtt.fi
@PaajanenSalla

Katri Valkokari
Research Manager, VTT
katri.valkokari(a)vtt.fi
@valkatti

Antti Knuuti
Key Account Manager, VTT
antti.knuuti(a)vtt.fi

If you want to read more about VTT's vision regarding smart and sustainable cities, read our new white paper: Let's turn your Smart City vision into reality. Smart City development is inherently multi-technological and cross-disciplinary, and as an application-oriented research organisation VTT is an ideal partner. We work with the public sector, private companies and technology providers in research and innovation activities that expedite the development of smarter cities. We can guide you from the early phases of vision creation and concept development to practical implementations of smart outcomes.