Modelling Activities of Daily Living with Petri nets

Modelling Activities of Daily Living (ADLs) is an important step in the process of designing and implementing reliable sensor systems that effectively monitor the activities of the ageing population. Once modelled, unusual activities may be detected that have the potential to impact a person's well-being. The use of Petri nets to model ADLs is considered in this research as a means to capture the intricate behaviours of ambient systems. To the best of our knowledge there has not been extensive work on this in the related literature, hence the novelty of this work. The ADLs considered in the developed Petri net model are: (i) preparing tea, (ii) preparing coffee, and (iii) preparing pasta. The first two ADLs are deemed to have many occurrences during a typical day of an elderly person. The third activity is representative of activities that involve cooking. Hence, abnormal behaviour detected in the context of these activities can be an indicator of a progressive health problem or the occurrence of a hazardous incident. The completion and non-completion of activities are considered in the developed Petri net model and are also formally verified. The description of the sensor system for the kitchen ADLs, its Petri net model and the verification results are presented. Results show that the Petri net modelling of ADLs can reliably and effectively reflect the real behaviour of the examined system, detecting all the activities of users exhibiting both normal and abnormal behaviour.
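
The abstract does not reproduce the model itself, but the completion/non-completion idea can be illustrated with a minimal sketch: the hypothetical Python Petri net below encodes a "preparing tea" activity as places and transitions driven by sensor events, with a token in the final place marking completion. The place and transition names are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch (not the paper's actual model) of how one kitchen ADL,
# "preparing tea", might be encoded as a Petri net: sensor events fire
# transitions, and a token in the final place marks activity completion.

class PetriNet:
    def __init__(self, places, transitions):
        # places: dict place -> initial token count
        # transitions: dict name -> (input places, output places)
        self.marking = dict(places)
        self.transitions = transitions

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1
        return True

# Hypothetical places and transitions for the "preparing tea" activity.
tea_net = PetriNet(
    places={"idle": 1, "kettle_on": 0, "cup_ready": 0, "tea_made": 0},
    transitions={
        "switch_on_kettle": (["idle"], ["kettle_on"]),
        "place_cup_and_teabag": (["kettle_on"], ["cup_ready"]),
        "pour_water": (["cup_ready"], ["tea_made"]),
    },
)

for event in ["switch_on_kettle", "place_cup_and_teabag", "pour_water"]:
    tea_net.fire(event)

# Completion check: the activity completed normally if "tea_made" holds a token.
print(tea_net.marking["tea_made"] == 1)
```

If a session's sensor events never enable "pour_water", the final place stays empty, which is the non-completion case the model verification is concerned with.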

The development and formative evaluation of the 'Worktivity' app: a behaviour change theory-based mobile app to reduce occupational sedentary behaviour

CLIEL":" Context-Based Information Extraction from Commercial Law Documents

The effectiveness of document Information Extraction (IE) is greatly affected by the structure and layout of the documents being considered. In the case of legal documents relating to commercial law, an additional challenge is the many different and varied formats, structures and layouts used. In this paper we present work on a flexible and scalable IE environment, CLIEL (Commercial Law Information Extraction based on Layout), for application to commercial law documentation; it allows layout rules to be derived and then utilised to support IE. The proposed CLIEL environment operates using NLP (Natural Language Processing) techniques, JAPE (Java Annotation Patterns Engine) rules and a number of GATE (General Architecture for Text Engineering) modules. The system is fully described and evaluated using a commercial law document corpus. The results demonstrate that considering the layout is beneficial for extracting data point instances from legal document collections.
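
The actual CLIEL environment is built on GATE and JAPE rules; the sketch below only illustrates, in plain Python with an invented toy document, the general idea of a layout rule: the lexical pattern for a data point is allowed to fire only inside the layout region (here, a "PARTIES" section) where that data point is expected.

```python
# A minimal sketch of layout-aware extraction (the real CLIEL system uses
# GATE/JAPE, not this code). The hypothetical rule extracts party names
# only when they appear inside the "PARTIES" section, so the layout
# constrains where the lexical pattern may match.
import re

document = """AGREEMENT

PARTIES
(1) Acme Holdings Limited ("the Seller")
(2) Beta Industries plc ("the Buyer")

GOVERNING LAW
This Agreement is governed by the laws of England and Wales.
"""

def extract_parties(text):
    # Layout rule: isolate the block between the "PARTIES" heading and the
    # next all-caps heading, then apply the lexical pattern inside it only.
    section = re.search(r"^PARTIES\n(.*?)(?=^[A-Z ]+$)", text, re.S | re.M)
    if not section:
        return []
    pattern = re.compile(r"\(\d+\)\s+(.+?)\s+\(\"(.+?)\"\)")
    return pattern.findall(section.group(1))

print(extract_parties(document))
# [('Acme Holdings Limited', 'the Seller'), ('Beta Industries plc', 'the Buyer')]
```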

Measuring the impact of cognitive distractions on driving performance using time series analysis

Using current sensing technology, a wealth of data on driving sessions is potentially available through a combination of vehicle sensors and drivers' physiology sensors (heart rate, breathing rate, skin temperature, etc.). Our hypothesis is that it should be possible to exploit the combination of time series produced by such multiple sensors during a driving session, in order to (i) learn models of normal driving behaviour, and (ii) use such models to detect important and potentially dangerous deviations from the norm in real time, thus enabling the generation of appropriate alerts. Crucially, we believe that such models and interventions should and can be personalised and tailor-made for each individual driver. As an initial step towards this goal, in this paper we present techniques for assessing the impact of cognitive distraction on drivers, based on simple time series analysis. We have tested our method on a rich dataset of driving sessions, carried out in a professional simulator, involving a panel of volunteer drivers. Each session included a different type of cognitive distraction, and resulted in multiple time series from a variety of on-board sensors as well as sensors worn by the driver. Crucially, each driver also recorded an initial session with no distractions. In our model, this initial session provides the baseline time series that make it possible to quantitatively assess driver performance under distraction conditions.
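
The abstract does not specify the exact analysis, so the following is only a sketch of the baseline idea under assumed sensor names and thresholds: windowed means of one signal from a later session are z-scored against the statistics of the driver's own distraction-free baseline session, and large deviations are flagged.

```python
# A minimal sketch (assumed signal, window size and threshold, not the
# paper's exact method) of scoring a driving session against a driver's
# personal distraction-free baseline.
import numpy as np

def window_means(series, window=50):
    n = len(series) // window
    return series[:n * window].reshape(n, window).mean(axis=1)

def deviation_scores(baseline, session, window=50):
    base = window_means(baseline, window)
    mu, sigma = base.mean(), base.std()
    return (window_means(session, window) - mu) / sigma

# Synthetic example: heart rate at 1 Hz, slightly elevated under distraction.
rng = np.random.default_rng(0)
baseline_hr = 70 + rng.normal(0, 2, 1800)       # no-distraction session
distracted_hr = 76 + rng.normal(0, 2, 1800)     # cognitively loaded session

scores = deviation_scores(baseline_hr, distracted_hr)
alerts = np.abs(scores) > 3.0                   # assumed alert threshold
print(f"{alerts.mean():.0%} of windows flagged as deviating from baseline")
```

Because the baseline is recorded per driver, the same code yields a personalised threshold for each individual rather than a single population-wide norm.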

Questionnaire Free Text Summarisation Using Hierarchical Classification

This paper presents an investigation into the summarisation of the free text element of questionnaire data using hierarchical text classification. The process makes the assumption that text summarisation can be achieved using a classification approach whereby several class labels can be associated with each document, and these labels then constitute the summarisation. A hierarchical classification approach is suggested, which offers the advantage that different levels of classification can be used and the summarisation customised according to which branch of the tree the current document is located in. The approach is evaluated using free text from questionnaires used in the SAVSNET (Small Animal Veterinary Surveillance Network) project. The results demonstrate the viability of using hierarchical classification to generate free text summaries.
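
As a rough illustration only (the labels, training texts and classifiers below are invented, not the SAVSNET scheme), hierarchical summarisation can be sketched as a root classifier that picks a branch of the label tree and a branch-specific classifier that adds a finer label, the resulting label sequence acting as the summary.

```python
# A minimal sketch of summarising free text via hierarchical classification:
# a root classifier chooses a branch, a per-branch classifier adds a finer
# label, and the pair of labels constitutes the summary. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train(texts, labels):
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# Level 1: broad topic of the free-text answer.
root = train(
    ["dog is scratching and losing fur", "cat sneezing with runny nose",
     "vomiting after eating", "coughing at night"],
    ["skin", "respiratory", "gastro", "respiratory"],
)
# Level 2: one classifier per branch (only two branches shown here).
branches = {
    "respiratory": train(["sneezing runny nose", "coughing wheezing"],
                         ["upper-airway", "lower-airway"]),
    "skin":        train(["scratching fur loss", "red itchy rash"],
                         ["pruritus", "dermatitis"]),
}

def summarise(text):
    topic = root.predict([text])[0]
    detail = branches[topic].predict([text])[0] if topic in branches else ""
    return [topic, detail] if detail else [topic]

print(summarise("the cat keeps sneezing and has a runny nose"))
```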

A Semi-Automated Approach to Building Text Summarisation Classifiers

An investigation into the extraction of useful information from the free text element of questionnaires, using a semi-automated summarisation extraction technique to generate text summarisation classifiers, is described. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool), is presented and evaluated using real questionnaire data. The results of this approach are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

Using Negation and Phrases in Inducing Rules for Text Classification

An investigation into the use of negation in Inductive Rule Learning (IRL) for text classification is described. The use of negated features in the IRL process has been shown to improve the effectiveness of classification. However, although in the case of small datasets it is perfectly feasible to include the potential negation of all possible features as part of the feature space, this is not possible for datasets that include large numbers of features, such as those used in text mining applications. Instead, a process whereby features to be negated can be identified dynamically is required. Such a process is described in the paper and compared with established techniques (JRip, NaiveBayes, Sequential Minimal Optimization (SMO), OlexGreedy). The work is also directed at an approach to text classification based on a “bag of phrases” representation; the motivation here being that a phrase contains semantic information that is not present in a single keyword. In addition, a given text corpus typically contains many more key-phrase features than keyword features, thereby providing more potential features to be negated.
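
The following sketch is not the paper's algorithm, but it illustrates the dynamic-identification idea in general covering-rule terms: while growing one rule, candidate negated literals are drawn only from features occurring in the negative documents the partial rule still covers, rather than from the negation of the whole vocabulary. All feature names and documents below are invented.

```python
# A minimal sketch of dynamically choosing a feature to negate while
# growing one rule: negated candidates come only from features seen in
# the negative documents still covered by the partial rule.
def covered(rule, doc):
    # rule: list of (feature, required) literals; doc: set of features.
    return all((f in doc) == required for f, required in rule)

def best_literal(rule, positives, negatives):
    pos = [d for d in positives if covered(rule, d)]
    neg = [d for d in negatives if covered(rule, d)]
    # Positive candidates: features occurring in covered positive documents.
    candidates = [(f, True) for d in pos for f in d]
    # Negated candidates: only features occurring in covered negatives.
    candidates += [(f, False) for d in neg for f in d]
    def score(lit):
        r = rule + [lit]
        p = sum(covered(r, d) for d in positives)
        n = sum(covered(r, d) for d in negatives)
        return p - n
    return max(set(candidates), key=score)

positives = [{"pasta", "boil", "salt"}, {"pasta", "sauce"}]
negatives = [{"pasta", "recipe", "book"}, {"coffee", "boil"}]

rule = [("pasta", True)]
# Returns a negated literal such as ('recipe', False) or ('book', False),
# since excluding those features separates the classes best here.
print(best_literal(rule, positives, negatives))
```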

An Investigation Concerning the Generation of Text Summarisation Classifiers using Secondary Data

An investigation into the potential effectiveness of generating text classifiers from secondary data for the purpose of text summarisation is described. The application scenario assumes a questionnaire corpus where we wish to provide a summary regarding the nature of the free text element of such questionnaires, but no suitable training data is available. The advocated approach is to build the desired text summarisation classifiers using secondary data and then apply these classifiers, for the purpose of text summarisation, to the primary data. We refer to this approach using the acronym CGUSD (Classifier Generation Using Secondary Data). The approach is evaluated using real questionnaire data obtained as part of the SAVSNET (Small Animal Veterinary Surveillance Network) project.
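
The core CGUSD idea can be sketched as follows, using invented toy data and an off-the-shelf classifier purely for illustration: the classifier is trained on a labelled secondary corpus from a related source and then applied, unchanged, to the unlabelled primary questionnaire free text to produce summary labels.

```python
# A minimal sketch of the CGUSD idea (toy data, not the SAVSNET corpus):
# train a summarisation classifier on labelled secondary data, then apply
# it to unlabelled primary questionnaire free text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Secondary data: labelled texts from a different but related source.
secondary_texts = ["dog vomiting and off food", "cat with itchy skin rash",
                   "puppy coughing and sneezing", "fur loss around the ears"]
secondary_labels = ["gastro", "skin", "respiratory", "skin"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(secondary_texts, secondary_labels)

# Primary data: unlabelled questionnaire free text to be summarised.
primary_texts = ["owner reports the cat has a rash and keeps itching",
                 "the dog has been vomiting since yesterday"]
print(list(zip(primary_texts, classifier.predict(primary_texts))))
```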

PHP14 Polypharmacy in Elderly Patients at the Mexican Institute of Social Security: Satisfaction and Costs

OBJECTIVES To identify cases of polypharmacy (PF) and to describe their social and clinical characteristics, satisfaction and costs in elderly patients who attended Family Medicine healthcare services at the Mexican Institute of Social Security (IMSS). METHODS Cross-sectional study in 260 elders (≥65 years old) who attended a Family Medicine facility at the IMSS in Mexico City. A survey and a concurrent review of medical records were performed to identify characteristics of drug prescription and patients’ satisfaction in the previous 3 months. The WHO definition of polypharmacy, the simultaneous consumption of more than 3 drugs, was used to classify this prescribing pattern. Costs were estimated from an institutional perspective and are expressed in US dollars (USD). RESULTS Mean age was 71 years (SD 6.9); 60.8% were female, 15.8% illiterate, 53.5% married, 10.4% single and 35.4% widowed. A high percentage (86.2%) reported having a chronic disease; the main problems were hypertension (57.7%), diabetes (35.4%), and sleep problems (35.4%). Satisfaction with medication was very high in 56.9%, high in 28.5%, mild in 8.1%, low in 1.2%, and very low in 0.8%. Mean drug cost per patient was 6.6 USD per month, with a maximum of 61.8 USD. Prescription of 3 drugs at the same time was reported in 64.2% and polypharmacy in 49.2%. CONCLUSION Our study found that polypharmacy was a common prescribing pattern in Family Medicine services. Prescription of 3 drugs at the same time and polypharmacy might lead to an important proportion of health care costs. Among the elderly population the proportion of chronic conditions was high, as was satisfaction with drug treatment. It is possible that there is a trade-off between improvement of symptoms and adverse side effects of drugs; therefore it would be necessary to research the quality of life, drug prescription and its justification in these patients.