My current Research & Development (R&D) activities focus on industrial-strength software technologies for Assisted Independent Living (also known as Independent Assisted Living or Ambient-Assisted Living), leading to the development of distributed systems composed of sensors, actuators, computers, and other ICT devices. These are used for a wide range of applications, spanning from enabling elderly and disabled people to enjoy living in their own homes to monitoring security and safety in controlled workplaces and environments.
As some of you may know, I am involved in Technabling Ltd, a spin-off company of the University that I co-founded and in which I currently work part-time as Research & Development Director. Technabling is a high-tech company that develops software-centric systems for assisted independent living; all student project proposals below are based on real-world projects devised as part of the company's R&D strategy and deemed suitable as individual student projects.
Technabling has been, and remains, the only spin-off company within the School of Natural and Computing Sciences (and perhaps within the whole University of Aberdeen) that routinely offers student projects and follow-up jobs at both UG and PG levels - and we are proud of that. At the end of the day, employment begins at home.
The unique relationship between Technabling Ltd and Computer Science students can be summarised in a few figures:
Eight students have become Technabling members of staff so far; five of them are currently with us. Others may give talks on the subject; we deliver the facts.
Back to projects: please note that the following conditions and restrictions apply to each and every student project, whether proposed on this page or, more generally, done in collaboration with Technabling:
People recognition from the top (part of the Smart House As Standard project): Recognising different individuals in a room using a wide-angle ceiling-mounted camera looking down.
Environment and human monitoring can be done effectively using reasonably priced, standard off-the-shelf cameras in conjunction with smart image analysis software that can recognise different individuals in a room. There is substantial research and development going on at the moment on identifying people; however, most projects look at people from the front rather than from the top. The project challenge is that of creating unique video fingerprints ('top videoprints') of people as seen moving around by a ceiling-mounted internet camera (provided by Technabling together with its software infrastructure) and using such videoprints to recognise them at a later time.
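To give a flavour of the kind of approach a student might explore, here is a minimal sketch of a videoprint as an appearance histogram matched by cosine similarity. Everything here is illustrative (the pixel samples, the bin count, the similarity threshold); a real system would extract person regions from actual camera frames first.

```python
# Hypothetical sketch: a "top videoprint" as a normalised intensity
# histogram of the pixels belonging to a person seen from above, matched
# against enrolled prints by cosine similarity. All values are illustrative.
import math

def videoprint(pixels, bins=8):
    """Build a normalised histogram over 0-255 grayscale pixel values."""
    hist = [0.0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recognise(sample_print, gallery, threshold=0.9):
    """Return the best-matching enrolled identity, or None if below threshold."""
    best_id, best_sim = None, threshold
    for person_id, stored in gallery.items():
        sim = cosine_similarity(sample_print, stored)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id

# Enrol two people from (synthetic) top-view pixel samples, then re-identify
# one of them from a slightly different sample of the same person.
alice = videoprint([30, 35, 40, 200, 210, 205, 60, 65])
bob = videoprint([120, 125, 130, 128, 122, 119, 121, 126])
gallery = {"alice": alice, "bob": bob}
print(recognise(videoprint([32, 33, 41, 198, 208, 207, 61, 63]), gallery))
```

In practice, histograms of raw intensities are a crude descriptor; the interesting research question is which features of a top-down view (clothing colour, gait, body shape) survive changes in pose and lighting.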
People recognition from the front (part of the Smart House As Standard project): Quick recognition of individuals knocking at a house door using a webcam.
Who is the person knocking at your door? CCTV can provide images of people at the (front) door, but image quality is often poor and impaired eyesight can prevent recognition by the house occupier anyway. There is substantial research and development going on at the moment on identifying people; however, it mostly focuses on facial recognition, which is difficult to achieve and can easily be fooled by glasses, wigs, hats, moustaches, etc. Yet there is a much wider range of difficult-to-alter physical parameters beyond the face that can be considered in order to decide whether somebody knocking at a house door is a known person. The project challenge is that of creating unique video fingerprints ('front videoprints') of people as seen approaching a (front) door, and standing in front of it, by a front-mounted wide-angle camera (provided by Technabling together with its software infrastructure) and using such videoprints to recognise them at a later time.
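The known/unknown decision itself can be framed as nearest-neighbour matching over those difficult-to-alter measurements. The following sketch is purely illustrative: the feature set (height, shoulder width, arm length), the enrolled values, and the distance threshold are all assumptions, not measurements from any real system.

```python
# Hypothetical sketch: deciding whether a visitor is a known person from
# difficult-to-alter body measurements (height, shoulder width, arm length,
# in cm) rather than from the face alone. All values are illustrative.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_visitor(features, enrolled, max_distance=5.0):
    """Return the nearest enrolled identity, or 'unknown' if none is close."""
    best_id, best_d = "unknown", max_distance
    for person_id, stored in enrolled.items():
        d = distance(features, stored)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id

enrolled = {"postman": (182.0, 48.0, 62.0), "neighbour": (165.0, 41.0, 55.0)}
print(identify_visitor((181.0, 47.5, 61.0), enrolled))  # close to the postman
print(identify_visitor((150.0, 38.0, 50.0), enrolled))  # nobody enrolled nearby
```

The hard part the project would address is not this matching step but extracting stable measurements from a wide-angle camera despite perspective distortion.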
Sound recognition (part of the Smart House As Standard project):
It is definitely possible to identify known sounds - for instance, it is possible to identify the song a short sound sample belongs to. However, things become more complicated if the sound sample does not exactly reproduce the stored sample. A more flexible approach to sound recognition is thus needed.
There are industrial algorithms for sound sample matching that are used to check whether the music somebody is publicly broadcasting belongs to a copyrighted song for which the broadcaster is supposed to pay royalties. However, what if we are not interested in recognising specific songs but rather categories of sounds and noises (such as a TV, a washing machine, an engine, a fight between two people, or distress) so that we can rapidly assess where a sound comes from and whether it implies something concerning that needs to be looked at immediately? Moreover, it is well known that sound recognition can fail if the (non-optimal) quality of the sound sensor is such that the collected samples are worse than, or different from, the original (e.g., through added noise or the filtering off of some spectral sound components). The project challenge is that of creating unique audioprints that can be used to identify the noise/sound source captured by a standard computer microphone, taking into account sample vs. stored audioprint differences due to distortions (noise, spectral filtering).
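As a minimal illustration of a noise-tolerant audioprint, one can describe a sample by how its energy is distributed over a few frequency bands (computed here with the Goertzel algorithm) and compare distributions rather than raw samples. The band centres, the sample rate, and the noisy test signal below are all illustrative choices, not part of any existing Technabling component.

```python
# Hypothetical sketch: an "audioprint" as the distribution of signal energy
# across a handful of frequency bands, computed with the Goertzel algorithm.
# Band centres, sample rate, and the noisy test signal are illustrative.
import math, random

def goertzel_power(samples, freq, rate):
    """Energy of `samples` at target frequency `freq` (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * freq / rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def audioprint(samples, rate, bands=(250.0, 500.0, 1000.0, 2000.0)):
    """Normalised energy per band: a crude, noise-tolerant fingerprint."""
    powers = [goertzel_power(samples, f, rate) for f in bands]
    total = sum(powers) or 1.0
    return [p / total for p in powers]

rate = 8000
tone = [math.sin(2 * math.pi * 500 * t / rate) for t in range(1024)]
random.seed(0)
noisy = [x + random.uniform(-0.2, 0.2) for x in tone]  # distorted capture

clean_print = audioprint(tone, rate)
noisy_print = audioprint(noisy, rate)
# Both prints are dominated by the 500 Hz band despite the added noise,
# so a distribution-based comparison still matches them.
print(max(range(4), key=lambda i: clean_print[i]),
      max(range(4), key=lambda i: noisy_print[i]))
```

A student project would replace the synthetic tone with real microphone captures and study which band statistics remain stable under the distortions discussed above.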
Text-to-sign Avatar (part of the Portable Sign Language Translator project):
Create an Avatar that visually renders written text using British Sign Language hand gestures.
Using governmental funding, we are developing the PSLT to allow people with hearing impairments to communicate with the wider community. The PSLT is a two-way system that (i) renders sign language as written text on a display and (ii) renders written text as sign language. Developing an Avatar that does the job is not simple: an open-source technology to program the Avatar needs to be identified, and a method to translate a sentence into a sequence of BSL hand gestures needs to be devised. The project challenge is that of defining a flexible methodology and developing a technology that is 'open' to additions to the existing vocabulary and, possibly, to non-English languages too.
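The text-to-gesture step can be sketched as a lexicon lookup with a fingerspelling fallback for out-of-vocabulary words. The gesture identifiers and the tiny lexicon below are purely illustrative, and the sketch deliberately ignores the fact that real BSL also reorders sentences rather than signing word by word.

```python
# Hypothetical sketch: turning an English sentence into a sequence of gesture
# identifiers for an avatar to play back. The lexicon and gesture IDs are
# illustrative; real BSL grammar (sentence reordering) is not modelled here.
LEXICON = {"hello": "GESTURE_HELLO", "my": "GESTURE_MY", "name": "GESTURE_NAME"}

def to_gestures(sentence):
    """Map each word to a gesture ID, fingerspelling unknown words."""
    gestures = []
    for word in sentence.lower().split():
        if word in LEXICON:
            gestures.append(LEXICON[word])
        else:  # fall back to letter-by-letter fingerspelling
            gestures.extend(f"FINGERSPELL_{c.upper()}" for c in word)
    return gestures

print(to_gestures("Hello my name Ada"))
```

Keeping the lexicon as plain data is what makes the vocabulary 'open': adding a sign, or a second language, means adding entries rather than changing code.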
Recognise facial expressions of sign language users that emphasise what they are saying (part of the Portable Sign Language Translator project):
We aim to develop a further PSLT functionality that specifically identifies facial expressions during hand signing, providing a range of specific indications that would support the sign-to-text rendering functionality that we have developed.
People with hearing impairments use sign language rather than voice. Each word in a sentence is rendered as a specific hand gesture; a visual pause in gesturing between one sentence and the next is used as a full stop. Although not essential, facial expressions are often used by signers in order to emphasise what they say - speakers do it too. Facial expressions can stress the truth of a statement, signal a doubt, underline how extreme an adjective is, etc. The project challenge is that of building on top of OpenCV functionalities to develop a real-time functionality that effectively identifies facial expressions that add further information to signed sentences.
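One possible approach, sketched below under strong simplifying assumptions, is to classify expressions from the geometry of face landmarks; in a real system the landmark coordinates would come from OpenCV (or a similar library) per video frame. The landmark names, ratios, and thresholds here are illustrative, not values from the actual PSLT.

```python
# Hypothetical sketch: labelling a facial expression from the geometry of
# face landmarks (which a real system would detect with OpenCV per frame).
# Landmark names, ratios, and thresholds are illustrative assumptions.
def classify_expression(landmarks):
    """Label an expression from eyebrow-raise and mouth-opening ratios."""
    face_h = landmarks["chin_y"] - landmarks["forehead_y"]
    brow_raise = (landmarks["eye_y"] - landmarks["brow_y"]) / face_h
    mouth_open = (landmarks["lower_lip_y"] - landmarks["upper_lip_y"]) / face_h
    if brow_raise > 0.12 and mouth_open > 0.08:
        return "surprise"        # raised brows + open mouth: strong emphasis
    if brow_raise > 0.12:
        return "questioning"     # raised brows only
    if brow_raise < 0.06:
        return "doubt"           # lowered/furrowed brows
    return "neutral"

# Synthetic landmarks (pixel y-coordinates, top of image = 0).
surprised = {"forehead_y": 10, "chin_y": 210, "brow_y": 60, "eye_y": 90,
             "upper_lip_y": 150, "lower_lip_y": 170}
print(classify_expression(surprised))
```

The research content of the project lies in choosing features and thresholds that hold up across signers and in running the classification in real time alongside hand-gesture recognition.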
Extending the PSLT to non-English languages:
The Portable Sign Language Translator currently recognises British Sign Language, translating signed sentences into English text.
However, the PSLT architecture allows ANY sign language to be translated into ANY written language. To enable that in practice, a template model must be built for each considered language and a corresponding natural language generator must be devised.
Templates and natural language generators exist for written English, but they are in scarce supply for other languages. We are currently considering the following languages: Spanish, French, Russian, and Chinese. The project challenge is that of reusing current open-source packages and applications as much as possible to devise flexible components that address sentence structuring and generation in each of the above languages. Each language would constitute a separate project in its own right.
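The template-model idea can be illustrated with a toy realiser that maps a language-neutral sequence of sign glosses onto a written sentence, with one template and vocabulary set per target language. The templates, glosses, and vocabulary below are minimal illustrations, not part of the actual PSLT architecture.

```python
# Hypothetical sketch: a template-based realiser turning language-neutral
# "sign glosses" into a written sentence, one template/vocabulary set per
# target language. All templates and vocabulary entries are illustrative.
TEMPLATES = {
    "en": {"statement": "{subject} {verb} {object}."},
    "es": {"statement": "{subject} {verb} {object}."},
}
VOCAB = {
    "en": {"I": "I", "WANT": "want", "WATER": "water"},
    "es": {"I": "yo", "WANT": "quiero", "WATER": "agua"},
}

def realise(glosses, lang):
    """Fill the statement template with the translated glosses."""
    subject, verb, obj = (VOCAB[lang][g] for g in glosses)
    sentence = TEMPLATES[lang]["statement"].format(
        subject=subject, verb=verb, object=obj)
    return sentence[0].upper() + sentence[1:]

print(realise(["I", "WANT", "WATER"], "en"))
print(realise(["I", "WANT", "WATER"], "es"))
```

The per-language project work is precisely where this toy breaks down: word order, agreement, and morphology differ per language, so each language needs its own templates and generation rules.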
We have other projects too! If you have read this far, you may have realised that we are developing articulated software platforms and infrastructures that provide a wide range of functionalities (not exclusively) based on video and audio processing. The above projects are just part of what we do and of what can potentially be offered as individual student projects. If you are interested in what we do and in the kind of technologies we use, and if you would like to explore the possibility of doing a project with us on a cognate topic not explicitly mentioned in this list, please let me know.
I am happy to discuss any kind of project that falls within my own research & development areas of interest, which currently match the Technabling areas of interest. However, I do not supervise self-proposed projects that do not fit with my current areas of interest and expertise.
Some projects are more methodological in nature, others are more implementation-oriented. However, to foster the professional development of students, we require each project to include the following three equally weighted components:
This requirement obviously excludes some potentially interesting projects that lack at least one of the listed components. Moreover, I do not propose projects whose preliminary risk analysis highlights major uncertainties that could affect both the quality and the size of the project outcome. The above project list is by no means exhaustive, and I do encourage students to propose modified or additional project topics within my areas of interest other than those explicitly listed.