{"id":1003,"date":"2023-07-11T11:40:42","date_gmt":"2023-07-11T02:40:42","guid":{"rendered":"https:\/\/www.elst.okayama-u.ac.jp\/en\/?page_id=1003"},"modified":"2025-06-09T14:26:19","modified_gmt":"2025-06-09T05:26:19","slug":"areas02_comp","status":"publish","type":"page","link":"https:\/\/www.elst.okayama-u.ac.jp\/en\/education\/mc-suuri\/areas02_comp\/","title":{"rendered":"Pattern Information Processing"},"content":{"rendered":"<h4>&nbsp;<\/h4>\n<p><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-69 aligncenter\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/theme.png\" alt=\"\" width=\"479\" height=\"395\"><\/p>\n<p>Our research interests include basic theories of pattern recognition and understanding, as well as the applied fields of visual information processing, language information processing, and speech information processing. In our research on Pattern Information Processing, we apply methods from neuroscience and artificial intelligence, such as machine learning, statistics, and data mining, to design appropriate feature representations and discriminative models for images, videos, texts, and speech.<\/p>\n<table class=\"el_table_fix\" style=\"width: 100%\">\n<tbody>\n<tr>\n<th style=\"width: 20%\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/OKABE Takahiro.png\" width=\"320\" height=\"408\"><\/th>\n<td style=\"width: 80%\">\n<ul>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Prof. 
OKABE Takahiro<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">E-mail: okabe<span style=\"color: #0000ff\"> [at]<\/span> okayama-u.ac.jp<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">computer vision, computational photography, image processing, computer graphics<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 12pt\"><a class=\"el_linkBtn\" href=\"https:\/\/soran.cc.okayama-u.ac.jp\/html\/5ed8f50cb15c62df3c8f9ec0ed19d09f_en.html\">Directory of Researchers<\/a> \u3000<a class=\"el_linkBtn\" href=\"https:\/\/www.cc.okayama-u.ac.jp\/vi\/okabe.en.html\">Link to group homepage<\/a><\/span><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<div style=\"width: 276px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img_OKABE.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img_OKABE.png\" alt=\"\" width=\"266\" height=\"203\"><\/a><p class=\"wp-caption-text\">Image understanding, recognition, and synthesis based on light fields<\/p><\/div>\n<p>We are studying technologies for image understanding, recognition, and synthesis in the fields of computer vision, computational photography, and computer graphics. 
Specifically, to capture and utilize the rich information contained in light fields, we research and develop methodologies that combine cutting-edge cameras and lighting systems with mathematical models and machine learning.<\/p>\n<p>&nbsp;<\/p>\n<table class=\"el_table_fix\" style=\"width: 100%\">\n<tbody>\n<tr>\n<th style=\"width: 20%\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/AKASHI_Takuya.jpg\" width=\"320\" height=\"408\"><\/th>\n<td style=\"width: 80%\">\n<ul>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Prof. AKASHI Takuya<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">E-mail: akashi<span style=\"color: #0000ff\"> [at]<\/span> okayama-u.ac.jp<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">artificial intelligence, computer vision, neuroscience, image recognition, human interface<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 12pt\"><a class=\"el_linkBtn\" href=\"https:\/\/soran.cc.okayama-u.ac.jp\/html\/4f7a540511d18c34d077bc38d49d4023_en.html#\">Directory of Researchers<\/a> \u3000<\/span><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img(E)_AKASHI.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img(E)_AKASHI.jpg\" alt=\"\" width=\"266\" height=\"203\"><\/a><\/p>\n<p>This laboratory aims to achieve mutual development by combining research outcomes from the fields of neuroscience and AI, such as image generation and motion image recognition. 
Additionally, through initiatives such as medical-engineering collaboration and the digital transformation (DX) of local companies, one of our goals is to contribute to the SDGs.<\/p>\n<p>&nbsp;<\/p>\n<table class=\"el_table_fix\" style=\"width: 100%\">\n<tbody>\n<tr>\n<th style=\"width: 20%\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/takeuchi_koichi.png\" width=\"320\" height=\"408\"><\/th>\n<td style=\"width: 80%\">\n<ul>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Assoc. Prof. TAKEUCHI Koichi<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">E-mail: <span id=\"content_ousdb1_lp$002fitem$005b22$005d$002fsys_mail\">takeuc-k<span style=\"color: #0000ff\"> [at]<\/span> <\/span>okayama-u.ac.jp<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">natural language processing, deep neural network model, large language model<br \/>\n<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 12pt\"><a class=\"el_linkBtn\" href=\"https:\/\/soran.cc.okayama-u.ac.jp\/html\/b781a62e72bfec26_en.html\">Directory of Researchers<\/a> \u3000<a class=\"el_linkBtn\" href=\"http:\/\/www.cl.cs.okayama-u.ac.jp\">Link to group homepage<\/a><\/span><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img1.png\" alt=\"\" width=\"266\" height=\"203\"><\/a><\/p>\n<p>As an application of natural language processing, we conduct research on automated essay scoring. Evaluating student essays is labor-intensive for human graders and raises issues of consistency in evaluation. 
To address these issues, we have developed an automated essay scoring model and a supporting tool for the essay grading process. By utilizing machine learning techniques, this research aims to establish a more reliable and efficient approach to scoring essays. Since Japanese essay datasets have been limited, we have constructed a dataset of scored Japanese essays and provided it to the research community.<\/p>\n<p>&nbsp;<\/p>\n<table class=\"el_table_fix\" style=\"width: 100%\">\n<tbody>\n<tr>\n<th style=\"width: 20%\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/HARA_Sunao.png\" width=\"320\" height=\"408\"><\/th>\n<td style=\"width: 80%\">\n<ul>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Assoc. Prof. HARA Sunao<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">E-mail: hara<span style=\"color: #0000ff\"> [at]<\/span> okayama-u.ac.jp<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Speech processing, Signal processing, Spoken dialog system, Lifelogs, Multimodal information processing<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 12pt\"><a class=\"el_linkBtn\" href=\"https:\/\/soran.cc.okayama-u.ac.jp\/html\/470e6ea82b6972d8_en.html\">Directory of Researchers<\/a> \u3000<a class=\"el_linkBtn\" href=\"https:\/\/www.sp.cs.okayama-u.ac.jp\/\">Link to group homepage<\/a><\/span><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<div style=\"width: 276px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img(E)_HARA.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.elst.okayama-u.ac.jp\/up_load_files\/areas\/areas02_comp\/img(E)_HARA.png\" alt=\"\" 
width=\"266\" height=\"203\"><\/a><p class=\"wp-caption-text\">Research topics<\/p><\/div>\n<p>Our goal is to develop technologies that enable advanced communication between humans and machines, such as spoken dialogue interfaces and action recognition from biological signals. To achieve this goal, we are researching methods that can represent and process multimodal information sources based on speech processing, signal processing, and machine learning.<\/p>\n<p>&nbsp;<\/p>\n<table class=\"el_table_fix\" style=\"width: 100%\">\n<tbody>\n<tr>\n<th style=\"width: 20%\">&nbsp;<\/th>\n<td style=\"width: 80%\">\n<ul>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Asst. Prof. YOSHIDA Michitaka<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">E-mail: michitaka-yoshida<span style=\"color: #0000ff\">&nbsp;[at]<\/span> okayama-u.ac.jp<\/span><\/li>\n<li style=\"list-style-type: none;font-size: 12px;margin-left: -20px\"><span style=\"font-size: 12pt\">Computer vision, Computational photography, Compressive sensing<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 12pt\"><a class=\"el_linkBtn\" href=\"https:\/\/soran.cc.okayama-u.ac.jp\/html\/3342c1aa465ddb83587ce9df1975c665_en.html\">Directory of Researchers<\/a> \u3000<a class=\"el_linkBtn\" href=\"https:\/\/michitakayoshida.github.io\/index_en.html\">Link to group homepage<\/a><\/span><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Light carries rich information about space, time, and color, and a camera can capture various types of real-world data. However, when post-processing is assumed, the goal shifts from optimizing the quality and amount of information in the captured image to optimizing the quality and accuracy of the final processed image. 
This approach, known as computational photography, allows for more efficient extraction of diverse information.<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Our research interests include basic theories of pattern recognition and understanding, and applied fie [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"parent":140,"menu_order":28,"comment_status":"closed","ping_status":"closed","template":"template-layout_single.php","meta":{"footnotes":""},"class_list":["post-1003","page","type-page","status-publish","hentry"],"aioseo_notices":[],"publishpress_future_action":{"enabled":false,"date":"2026-04-29 01:29:27","action":"change-status","newStatus":"draft","terms":[],"taxonomy":"","extraData":[]},"publishpress_future_workflow_manual_trigger":{"enabledWorkflows":[]},"_links":{"self":[{"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/pages\/1003","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/comments?post=1003"}],"version-history":[{"count":15,"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/pages\/1003\/revisions"}],"predecessor-version":[{"id":3145,"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/pages\/1003\/revisions\/3145"}],"up":[{"embeddable":true,"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/pages\/140"}],"wp:attachment":[{"href":"https:\/\/www.elst.okayama-u.ac.jp\/en\/wp-json\/wp\/v2\/media?parent=1003"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}