AI, VR/MR
Given the current world in which our customers' needs are constantly changing, it is essential to continually take on challenges in new areas. At IntaSect, we are committed to creating innovative solutions and providing optimal services to our customers through such challenges.
The use of cutting-edge artificial intelligence technologies, such as machine learning and generative AI, is also part of our approach. By utilizing these technologies, we are able to address complex issues and come up with effective and efficient solutions. We are sensitive to the evolution of technology and actively introduce new methods and tools to maintain our competitive edge.

Initiatives with Yamaguchi University AISMEC
Yamaguchi University established the AI Systems Medicine Research and Training Center (AISMEC) in 2018, to promote both AI and systems biology at the Graduate School of Medicine and the Yamaguchi University Hospital. Through collaborative research with Yamaguchi University, IntaSect is building systems and services using various advanced technologies such as AI and VR/MR.
Aiming to Implement Medical AI in Clinical Settings
In particular, in the area of AI-related research, we are developing an integrated medical-information/AI system with the aim of implementing medical AI in clinical settings. The system serves as a framework for linking medical AI to medical information systems, including electronic medical record systems and clinical decision support systems (CDSS).

For example, in research on identifying drugs that cause adverse reactions, we built a system that predicts the responsible drugs from the prescriptions and injections administered to a patient with a suspected adverse reaction over the preceding six months, and reports the prediction on the medical information system.
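The core of such a prediction starts with a look-back query over the patient's medication history. The sketch below is purely illustrative (the function names, record layout, and recency-based ordering are assumptions, not the actual research system, which would rank candidates with a trained model):

```python
from datetime import date, timedelta

def candidate_drugs(medication_history, suspected_on, window_days=180):
    """Return drugs administered within the look-back window, most recent first.

    Hypothetical sketch: a real system would rank candidates by a trained
    model's score rather than by recency.
    """
    cutoff = suspected_on - timedelta(days=window_days)
    recent = [r for r in medication_history
              if cutoff <= r["date"] <= suspected_on]
    recent.sort(key=lambda r: r["date"], reverse=True)
    return [r["drug"] for r in recent]

history = [
    {"drug": "Drug A", "date": date(2023, 1, 5)},
    {"drug": "Drug B", "date": date(2023, 5, 20)},
    {"drug": "Drug C", "date": date(2022, 10, 1)},  # outside the 6-month window
]
print(candidate_drugs(history, suspected_on=date(2023, 6, 1)))
# ['Drug B', 'Drug A']
```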
In the future, we will continue to integrate other types of medical AI developed at the Yamaguchi University Graduate School of Medicine and Yamaguchi University Hospital into this coordinated system. Based on the knowledge gained in the process, we aim to further standardize both the procedures for integrating new medical AI into the system and the common framework used in this process.
Use of Hand Recognition in Video
We are developing software that uses MediaPipe Hands, a machine-learning-based hand-tracking model, to detect finger joint movements in videos of the hands, and to record or play back the estimated 3D spatial coordinates of each hand and finger joint as a time series. The aim is to estimate the type of disorder caused by nerve damage along the arm from abnormal hand movements.

Recording of Hand and Finger Joint Coordinates
The software allows the setting of frame skip intervals and batch processing of multiple videos.
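The recording loop with a frame-skip interval and batch processing over several videos can be sketched roughly as follows. The detector is stubbed out here; in the actual software a MediaPipe Hands model would return the 21 (x, y, z) joint coordinates per frame (all names below are illustrative assumptions):

```python
def record_landmarks(frames, detect_hand_landmarks, frame_skip=1):
    """Sample every `frame_skip`-th frame and keep a time series of joint coords."""
    series = []
    for i, frame in enumerate(frames):
        if i % frame_skip != 0:
            continue  # frame-skip interval keeps the recorded series manageable
        series.append({"frame": i, "joints": detect_hand_landmarks(frame)})
    return series

def batch_process(videos, detect_hand_landmarks, frame_skip=1):
    """Process multiple videos in one run, keyed by file name."""
    return {name: record_landmarks(frames, detect_hand_landmarks, frame_skip)
            for name, frames in videos.items()}

# Stub standing in for the MediaPipe Hands model (21 joints per frame).
stub = lambda frame: [(0.0, 0.0, 0.0)] * 21
out = batch_process({"clip.mp4": list(range(10))}, stub, frame_skip=3)
print([entry["frame"] for entry in out["clip.mp4"]])  # [0, 3, 6, 9]
```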
Loading and Displaying Coordinate Data
The software reads time-series data of hand and finger coordinates, and a stick model of the hand built from the data can be superimposed on the video. This is useful for intuitively checking the data after processing steps such as noise removal, and for creating presentation materials.
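One common post-processing step of the kind mentioned above is smoothing a joint's coordinate track to suppress jitter. A minimal sketch, using a centered moving average (the actual software's filtering method is not specified, so this is only an illustration):

```python
def smooth(track, window=3):
    """Smooth a 1-D coordinate track with a centered moving average.

    Edge frames use a truncated window so the output has the same length.
    """
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        out.append(sum(track[lo:hi]) / (hi - lo))
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(smooth(noisy))  # jitter is damped toward the mean
```

In practice this would be applied per joint and per axis across the recorded time series before superimposing the stick model.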
Use of VR (Virtual Reality)/MR (Mixed Reality)
We are developing a system that allows multiple physicians to share 3D models of the human body and organs using VR/MR technology, for example to review a surgical procedure before the operation. The 3D models can be displayed and manipulated in virtual space using a Meta Quest 2 or HoloLens 2, while people without those devices can observe them on a PC monitor or iPad.

Creating Sessions
Multiple sessions can be created in parallel. Participants log in by designating a session and share 3D models accordingly.
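The session bookkeeping described above can be pictured as a small piece of server state. The class and method names here are assumptions for illustration, not the actual system's API:

```python
class SessionServer:
    """Illustrative sketch: parallel sessions that participants join by name."""

    def __init__(self):
        self.sessions = {}  # session name -> set of participant names

    def create_session(self, name):
        self.sessions.setdefault(name, set())

    def join(self, session, participant):
        if session not in self.sessions:
            raise KeyError(f"no such session: {session}")
        self.sessions[session].add(participant)

server = SessionServer()
server.create_session("liver-case-1")
server.create_session("cardiac-review")   # sessions run in parallel
server.join("liver-case-1", "Dr. Sato")
server.join("liver-case-1", "Dr. Tanaka")
print(sorted(server.sessions["liver-case-1"]))  # ['Dr. Sato', 'Dr. Tanaka']
```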
Manipulating 3D Models
Participants in a session can select the 3D model to be shared and perform translation, rotation, scaling, and other operations. They can also point out specific locations on the model to the other participants by placing markers.
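The shared state behind those operations is essentially a transform plus a marker list. The sketch below shows translation, rotation about the vertical axis, and uniform scaling applied to a model-space point; all names are illustrative assumptions, not the actual system's implementation:

```python
import math

class SharedModel:
    """Illustrative shared-model state: offset, yaw rotation, scale, markers."""

    def __init__(self):
        self.offset = (0.0, 0.0, 0.0)
        self.yaw = 0.0      # rotation about the vertical (y) axis, in radians
        self.scale = 1.0
        self.markers = []   # model-space points participants have flagged

    def world_point(self, p):
        """Map a model-space point to world space: scale, then rotate, then translate."""
        x, y, z = (c * self.scale for c in p)
        cos_a, sin_a = math.cos(self.yaw), math.sin(self.yaw)
        x, z = x * cos_a + z * sin_a, -x * sin_a + z * cos_a
        ox, oy, oz = self.offset
        return (x + ox, y + oy, z + oz)

    def place_marker(self, p):
        self.markers.append(p)

m = SharedModel()
m.scale = 2.0
m.yaw = math.pi / 2          # quarter turn about the vertical axis
m.offset = (1.0, 0.0, 0.0)
m.place_marker((1.0, 0.0, 0.0))
print(m.world_point((1.0, 0.0, 0.0)))
```

In the real system this transform state would be broadcast to every participant so all devices render the model identically.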
Device-Specific Display and Control
When participants in a session use Meta Quest 2, they can manipulate the 3D model using the Meta Quest 2 controller. When participants use HoloLens 2, they can manipulate the 3D model through hand recognition. Even if participants do not have VR devices, they can share the session by displaying it on a PC monitor or tablet.
Switching Perspectives
Participants in a session can switch between “Individual Viewpoint Mode,” in which each participant has his or her own viewpoint, and “Viewpoint Sharing Mode,” in which all participants share a particular participant’s viewpoint.
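Conceptually, the mode switch only changes whose camera pose a participant's view is rendered from. A minimal sketch (class and method names are assumptions for illustration):

```python
class ViewpointManager:
    """Illustrative sketch of the two viewpoint modes."""

    def __init__(self):
        self.mode = "individual"
        self.shared_owner = None
        self.cameras = {}  # participant -> camera pose

    def share_viewpoint_of(self, participant):
        """Viewpoint Sharing Mode: everyone sees through this participant's camera."""
        self.mode, self.shared_owner = "shared", participant

    def to_individual(self):
        """Individual Viewpoint Mode: each participant keeps their own camera."""
        self.mode, self.shared_owner = "individual", None

    def camera_for(self, participant):
        if self.mode == "shared":
            return self.cameras[self.shared_owner]
        return self.cameras[participant]

vm = ViewpointManager()
vm.cameras = {"A": "pose-A", "B": "pose-B"}
print(vm.camera_for("B"))     # pose-B (individual mode)
vm.share_viewpoint_of("A")
print(vm.camera_for("B"))     # pose-A (everyone follows A)
```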
New AI OCR
In response to a customer's inquiry about AI OCR for handwriting recognition, we developed a new AI OCR based on the 2021 paper "TrOCR: Transformer-Based Optical Character Recognition with Pre-Trained Models." Building on an implementation of this approach, we can further enhance the model's performance by incorporating the customer's own training data.
New AI OCR Features
1. Adoption of TrOCR Technology
Our AI OCR adopts the TrOCR approach proposed in the 2021 paper. Transformer-based models have demonstrated excellent performance in natural language processing, and TrOCR applies the same architecture to deliver high accuracy in handwriting recognition.
2. Use of Proprietary Training Data
By incorporating handwriting samples provided by the customer as training data, the model can be optimized for specific industries and applications while retaining high generalization performance. This enables the creation of personalized AI OCR tailored to customer needs.
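A standard way to check whether fine-tuning on customer data actually helps is character error rate (CER): the edit distance between the model's output and the ground truth, divided by the ground-truth length. This is a generic OCR metric, sketched here for illustration, not a description of our system's internals:

```python
def cer(reference, hypothesis):
    """Character error rate: Levenshtein distance / reference length."""
    # Classic dynamic-programming edit distance, row by row over the reference.
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        cur = [i]
        for j, h in enumerate(hypothesis, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / max(len(reference), 1)

print(cer("山口太郎", "山口大郎"))  # 0.25: one substituted character out of four
```

Comparing CER on a held-out set of the customer's documents before and after fine-tuning gives a concrete measure of the improvement.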
3. Flexible Customization
Our AI OCR is designed to be flexible and scalable, allowing users to customize the model and its functions according to their requirements. It can also be deployed on-premises, unlike conventional AI OCR products that are offered only as cloud services. This enables us to provide customized OCR solutions for a variety of industries and business environments.

The current base model was trained on a total of 2.5 million samples, generated in multiple fonts from the following data:
- Japanese first and last names (54,000)
- European first and last names (30,000)
- Chinese sentences (230,000)
- Approx. 6,000 Chinese characters in everyday use
- Approx. 2,600 everyday-use Japanese kanji announced by the Agency for Cultural Affairs
- 1.35 million Japanese Wikipedia pages as of January 1, 2023