Presentation time:
HASCA oral presentation, 20 min (approx. 15-min talk + 5-min Q&A)

09:00-09:10 Opening (Chair: Kazuya Murao)
09:10-10:30 Session 1 [20min x4] (Chair: Kazuya Murao)
  • Towards LLMs for Sensor Data: Multi-Task Self-Supervised Learning
    Tsuyoshi Okita(Kyushu Institute of Technology), Kosuke Ukita(Kyushu Institute of Technology), Koki Matsuishi(Kyushu Institute of Technology), Masaharu Kagiyama(Kyushu Institute of Technology), Kodai Hirata(Kyushu Institute of Technology), Asahi Miyazaki(Kyushu Institute of Technology)
  • Predicting and Analyzing Emotion of Elderly People in Care Facilities
    Xinyi Min(Kyushu Institute of Technology), Haru Kaneko(Kyushu Institute of Technology), Sozo Inoue(Kyushu Institute of Technology)
  • Personalized federated human activity recognition through semi-supervised learning and enhanced representation
    Lulu Gao(Kyushu University), Shin'ichi Konomi(Kyushu University)
  • Investigating the Effect of Orientation Variability in Deep Learning-based Human Activity Recognition
    Azhar Ali Khaked(Concordia University), Nobuyuki Oishi(University of Sussex), Daniel Roggen(University of Sussex), Paula Lago(Concordia University)
10:30-11:00 Coffee Break
11:00-12:20 Session 2 [20min x4] (Chair: Paula Lago)
  • Cardiac massage practice application using barometer in a smart phone and sealed bag
    Soto Mizukusa(Aichi Institute of Technology), Katsuhiko Kaji(Aichi Institute of Technology)
  • Eye movement differences in Japanese text reading between cognitively healthy older and younger adults
    Jumpei Kobayashi(Dai Nippon Printing Co., Ltd.), Hiroyuki Suzuki(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Kenichiro Sato(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Susumu Ogawa(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Hiroko Matsunaga(Tokyo Metropolitan Institute for Geriatrics and Gerontology), Toshio Kawashima(Future University Hakodate)
  • A Data-Driven Study on the Hawthorne Effect in Sensor-Based Human Activity Recognition
    Alexander Hoelzemann(University of Siegen), Marius Bock(University of Siegen), Ericka Andrea Valladares Bastias(University of Siegen), Salma El Ouazzani Touhami(University of Siegen), Kenza Nassiri(University of Siegen), Kristof Van Laerhoven(University of Siegen)
  • Eco-Friendly Sensing for Human Activity Recognition
    Kaede Shintani(Osaka University), Hamada Rizk(Osaka University), Hirozumi Yamaguchi(Osaka University)
12:30-14:00 Lunch Break
14:00-15:30 Session 3 [SHL session]
  • SHL intro [4 min]
  • SHL summary paper [15 min]
  • SHL top 3 papers [36 min]
  • SHL award ceremony [5 min]
  • SHL poster session [18 min]
15:30-16:00 Coffee Break with SHL poster (cont'd)
16:00-17:00 Session 4 [20min x3] (Chair: Yu Enokibori)
  • Where Are the Best Positions of IMU Sensors for HAR? - Approach by a Garment Device with Fine-Grained Grid IMUs -
    Akihisa Tsukamoto(Nagoya University), Naoto Yoshida(Kogakuin University), Tomoko Yonezawa(Kansai University), Kenji Mase(Nagoya University), Yu Enokibori(Nagoya University)
  • Toward Pioneering Sensors and Features Using Large Language Models in Human Activity Recognition
    Haru Kaneko(Kyushu Institute of Technology), Sozo Inoue(Kyushu Institute of Technology)
  • Human activity recognition for packing processes using CNN-biLSTM
    Alberto Angulo(Sonora Institute of Technology), Jessica Beltran(Universidad Autonoma de Coahuila), Luis A. Castro(Sonora Institute of Technology)
17:00-17:10 Closing

Welcome to HASCA2023

Welcome to the HASCA2023 website!

HASCA2023 is the eleventh International Workshop on Human Activity Sensing Corpus and Applications. The workshop will be held in conjunction with UbiComp/ISWC2023.

Important Dates
Submission Deadline: June 12th, 2023 (extended from June 5th, 2023)
Acceptance Notification: June 30th, 2023
Camera-ready: July 10th, 2023
Workshop: October 8th, 2023

Notice: This year, the HASCA 2023 workshop will be held in Cancun, Mexico.


The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.

We expect relevant contributions to this workshop from the following domains (but not limited to):

Data collection / Corpus construction

Experiences or reports from data collection and/or corpus construction projects, such as papers describing formats, styles, or methodologies for data collection. Crowd-sourced data collection and participatory sensing also fall under this topic.

Effectiveness of Data / Data Centric Research

There is a field of research based on collected corpora, called "Data-Centric Research". We also solicit reports on experiences of using large-scale human activity sensing corpora. Combining large-scale corpora with machine learning leaves considerable room for improving recognition performance.

Tools and Algorithms for Activity Recognition

With appropriate and suitable tools for managing sensor data, activity recognition researchers could focus more on their research themes. However, the development of tools and algorithms for sharing within the research community is not much appreciated. In this workshop, we solicit development reports on tools and algorithms that move the community forward.

Real World Application and Experiences

Activity recognition "in the lab" usually works well; the same is not true in the real world. In this workshop, we also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment", and large-scale human activity sensing corpora will help to overcome it.

Sensing Devices and Systems

Data collection is not performed only with "off-the-shelf" sensors; special devices sometimes need to be developed to obtain certain kinds of information. There is also a research area around developing and evaluating systems and technologies for data collection.

Mobile experience sampling, experience sampling strategies

Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g., smartwatches), are likely to play an important role in obtaining user-contributed annotations of users' own activities.

Unsupervised pattern discovery

Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering annotation crowd-sourcing.

Dataset acquisition and annotation through crowd-sourcing, web-mining

An abundance of sensor data is potentially within reach, with users instrumented through their mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.

Transfer learning, semi-supervised learning, lifelong learning

The ability to transfer recognition models across modalities, or to use minimal supervision, would allow datasets to be reused across domains and reduce the cost of acquiring annotations.