[{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/blog/","section":"Blog","summary":"","title":"Blog"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/gnn/","section":"Tags","summary":"","title":"GNN"},{"content":"Master\u0026rsquo;s student in Computer Science and Engineering at Toyohashi University of Technology (Uehara Lab).\nInterested in how AI can bring new insights to neuroscience — analyzing brain networks as graphs with GNNs, and using counterfactual explanations to turn model predictions into actionable insights for clinical use. Also exploring how to extract and visualize the knowledge that AI models acquire internally.\n","date":null,"permalink":"https://jhonda.ochakumi.com/","section":"Junya G. Honda","summary":"","title":"Junya G. Honda"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/representation-analysis/","section":"Tags","summary":"","title":"Representation Analysis"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/research/","section":"Tags","summary":"","title":"Research"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"I work on two things that look unrelated at first glance: analyzing neural data with graph neural networks, and exploring how AI models represent concepts internally.\nThey\u0026rsquo;re not separate interests. They depend on each other.\nUsing AI to understand the brain #Brains are networks. Neurons fire, synchronize, form patterns — and somewhere in that activity, seizures happen. We can build a graph from EEG data, train a GNN to classify seizures, and get high accuracy. But accuracy alone doesn\u0026rsquo;t help clinicians. The model is a black box, and clinical adoption requires knowing why a prediction was made.\nThis is where explainable AI comes in — but not all explanations are equal. 
I'm particularly interested in counterfactual explanations: “what would need to change for the prediction to flip?” This is fundamentally different from saliency maps or feature importance. It's actionable — a clinician can reason about what a counterfactual means in a way that a heatmap doesn't afford.\nIn practice, though, it's harder than it sounds. You can generate counterfactual graphs, but interpreting what those changes mean neuroscientifically — that's where the real difficulty begins.\nUnderstanding AI to use it better\nSo you're using a complex system (a GNN) to analyze another complex system (a brain). But if you don't understand what the GNN is doing internally, how much can you trust what it tells you about the brain?\nThis is why I also work on representation analysis. Language models and neural networks map inputs into high-dimensional spaces, and the structure of those spaces reflects something about what the model has learned. With Trendscape, we build neighbor graphs in embedding spaces and trace paths between concepts — exploring not the concepts themselves, but the space that holds them. How are things arranged? What connects them?\nThe answer isn't a number. It's a visualization that a human looks at and finds meaning in — or doesn't.\nBack and forth\nThe two directions feed each other. To learn something about the brain, I need AI tools I can trust. To trust those tools, I need to understand what's happening inside them. Neither side is the foundation — I'm moving between them, using each to make progress on the other.\nWhat I still don't know\nThese questions don't stay inside computer science. What is a concept? When we say a model “represents” something, what does that mean — understanding, or statistical regularity? 
And counterfactual reasoning itself is philosophically contested. There are serious objections to counterfactual accounts of causation, and I can't hand-wave those away because my model produces outputs that look useful.\nI'm an engineer working at the edge of philosophy, neuroscience, and cognitive science. I have a lot more to learn — about causation, about what concepts really are, about whether “does AI understand?” is even a well-formed question. But I think these are questions worth sitting with, and I'd rather keep building while staying honest about what I don't know.\n","date":"April 2026","permalink":"https://jhonda.ochakumi.com/blog/what-im-trying-to-figure-out/","section":"Blog","summary":"On what I find interesting right now: analyzing brain networks with GNNs, and looking inside AI models","title":"What I'm Trying to Figure Out"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/xai/","section":"Tags","summary":"","title":"XAI"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/counterfactual-explanation/","section":"Tags","summary":"","title":"Counterfactual Explanation"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/eeg/","section":"Tags","summary":"","title":"EEG"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/epilepsy/","section":"Tags","summary":"","title":"Epilepsy"},{"content":"Co-authored\nYutaro Takayama, Karim Mithani, Kazumasa Uehara, Junya Honda, Keiichi Kitajo, Masaki Iwasaki, Tetsuya Yamamoto, Ayako Ochi, Hiroshi Otsubo, George M Ibrahim. “Elucidating the Mechanism of Preserved Consciousness During Seizures in Focal Epilepsy: A Focus on the Thalamic CM Nucleus–Cortical Network.” 55th Annual Meeting of the Japanese Society of Clinical Neurophysiology, Nov 2025, Okinawa\nYutaro Takayama, Karim Mithani, Kazumasa Uehara, Junya Honda, Keiichi Kitajo, Masaki Iwasaki, Tetsuya Yamamoto, Ayako Ochi, Hiroshi Otsubo, George M Ibrahim. “Elucidating the Mechanism of Preserved Consciousness During Seizures in Focal Epilepsy: A Focus on the Thalamic CM Nucleus–Cortical Network.” 58th Annual Meeting of the Japan Epilepsy Society, Oct 2025, Tochigi\n","date":null,"permalink":"https://jhonda.ochakumi.com/research/","section":"Research","summary":"","title":"Research"},{"content":"Venue\n28th Annual Meeting of the Japan Human Brain Mapping Society (JHBM 2026), LBA011, Himeji\nAuthors\nJunya 
G. Honda (TUT), Yutaro Takayama (Yokohama City Univ. Hospital), Kazumasa Uehara (TUT)\nEnglish Title\nClassification of epileptic seizures using graph neural network combined with counterfactual explanation\nDetecting epileptic seizures requires long hours of EEG reading by specialists, and automated detection with deep learning has been studied, but the black-box nature of model predictions remains a barrier to clinical adoption. This study proposed adding counterfactual explanations to a seizure classification model that applies a graph neural network (GNN) to scalp EEG. From scalp EEG of 24 subjects in the CHB-MIT database, we constructed graphs using power values over 18 electrodes × 6 frequency bands as node features and inter-electrode phase synchronization as edge features, performed binary seizure/non-seizure classification with a GNN, and applied the counterfactual explanation method COMBINEX. The results showed that changing node features alone was sufficient to flip predictions, suggesting that the model weighs electrode-local features over network structure. In addition, seizure-to-non-seizure modification patterns showed a consistent tendency, while non-seizure-to-seizure patterns were more varied.\n","date":"March 2026","permalink":"https://jhonda.ochakumi.com/research/jhbm2026-gnn-epilepsy/","section":"Research","summary":"Seizure classification with a GNN on scalp EEG from 24 CHB-MIT subjects, applying the counterfactual explanation method COMBINEX. Changing node features alone sufficed to flip predictions, suggesting the model weighs local features over network structure","title":"Classification of Epileptic Seizures Using a Graph Neural Network and Counterfactual Explanation"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/communication-topology/","section":"Tags","summary":"","title":"Communication Topology"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/deliberation/","section":"Tags","summary":"","title":"Deliberation"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/diversity/","section":"Tags","summary":"","title":"Diversity"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/llm/","section":"Tags","summary":"","title":"LLM"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/multi-agent/","section":"Tags","summary":"","title":"Multi-Agent"},{"content":"Venue\n32nd Annual Meeting of the Association for Natural Language Processing (NLP 2026), B4-21\nAuthors\nJunya G. 
Honda (TUT), Kotaro Sakamoto, Jumpei Ukita, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo (UTokyo)\nMulti-agent debate, in which multiple large language model (LLM) agents discuss a problem, can mutually correct single-model errors, yet has also been reported not to outperform a strong single agent with majority voting. Keeping the debate protocol fixed, we controlled only the communication topology and debate depth, and systematically compared how accuracy and diversity evolve over time. Across six topologies (ring, star, complete graph, Erdős–Rényi, small-world, and scale-free), agents shared summaries from their neighbors each round and output final answers, which were aggregated by majority vote. Analyzing lexical metrics (such as n-gram entropy) and variance in embedding space yielded three tendencies: (1) sparse rings mix slowly and show dull early improvement, (2) graphs with hubs tend to reach high accuracy within 1–2 rounds, and (3) lexical diversity and semantic diversity can diverge. We further discuss caveats in metric design when porting the setup to Japanese tasks.\n","date":"March 2026","permalink":"https://jhonda.ochakumi.com/research/nlp2026-mad-topology/","section":"Research","summary":"Systematic comparison of six topologies including ring, star, and complete graph, confirming fast consensus convergence in hub-based graphs and a divergence between lexical and semantic diversity","title":"Communication Topology Shapes Deliberation: Analyzing Accuracy and Diversity in Multi-Agent LLMs"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/conversational-ai/","section":"Tags","summary":"","title":"Conversational AI"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/dementia/","section":"Tags","summary":"","title":"Dementia"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/line/","section":"Tags","summary":"","title":"LINE"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/projects/","section":"Projects","summary":"","title":"Projects"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/re-mentia/","section":"Tags","summary":"","title":"Re-MENTIA"},{"content":"About\nTomori is an AI-powered care companion developed by Re-MENTIA to assist people with dementia — particularly those experiencing behavioral and psychological symptoms of dementia (BPSD) such as wandering and confusion. Built on LLM-based dialogue agents, it provides personalized responses and support through voice interaction. The platform also connects families through LINE, sharing updates and enabling remote monitoring.\nRole\nFounding member of Re-MENTIA (joined March 2025). 
Engineer on Tomori development.\nRecognition\nIPA Mitou Advanced Program, first half of 2025 — “Development of a virtual helper for people with dementia and creation of an independence-support platform,” PM Hiroshi Ishiguro; accepted and completed.\nLinks\nRe-MENTIA / Tomori / IPA project overview\n","date":"April 2025","permalink":"https://jhonda.ochakumi.com/projects/tomori/","section":"Projects","summary":"A conversational AI companion aimed at supporting independence for people with dementia. It gets to know users through dialogue over LINE and also supports their connection with family. Joined development as a founding member of Re-MENTIA. Accepted and completed in the IPA Mitou Advanced Program, first half of 2025 (PM: Ishiguro).","title":"Tomori — AI Companion for Dementia Care"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/concept-exploration/","section":"Tags","summary":"","title":"Concept Exploration"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/latent-space/","section":"Tags","summary":"","title":"Latent Space"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/sae/","section":"Tags","summary":"","title":"SAE"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/trendscape/","section":"Tags","summary":"","title":"Trendscape"},{"content":"Venue\n31st Annual Meeting of the Association for Natural Language Processing (NLP 2025), P2-25\nAuthors\nJunya G. 
Honda (TUT), Kotaro Sakamoto (UTokyo), Shiro Takagi (Independent Researcher), Yusuke Hayashi (AI Alignment Network), Shuhei Ogawa (Emosta), Yutaka Matsuo (UTokyo)\nWe introduce Trendscape 1.0, a method for exploring and visualizing the latent concept space inside language models. Natural language is mapped into a latent space to build a neighbor graph, and path search over the graph probes how concepts relate. By visualizing conceptual paths between literary works and analyzing the resulting network, we offer insights into how language models understand concepts.\nSee Also\nInter-concept exploration method (YANS 2024) / Trendscape Project\n","date":"March 2025","permalink":"https://jhonda.ochakumi.com/research/nlp2025-trendscape/","section":"Research","summary":"Mapping natural language into a latent space and building neighbor graphs; analyzing language models' concept understanding through exploration and visualization of conceptual paths between literary works","title":"Trendscape 1.0: Concept Exploration on Language Model Latent Spaces"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/mechanistic-interpretability/","section":"Tags","summary":"","title":"Mechanistic Interpretability"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/word-embeddings/","section":"Tags","summary":"","title":"Word Embeddings"},{"content":"Venue\n19th Symposium of the Young Researcher Association for NLP (YANS 2024), S5-P02\nAuthors\nJunya G. Honda (TUT / Emosta), Shuhei Ogawa (Emosta), Kotaro Sakamoto (UTokyo)\nDo language models acquire concepts the way humans do? This study aims to visualize connections between concepts and reveal how a model interprets concepts internally. We proposed a method that maps inputs into an embedding space and stores them, samples the exploration space between concepts to build a network, and visualizes inter-concept relations through path search and neighborhood resampling. We validated it with two approaches: structural analysis of three works from Aozora Bunko using Word2Vec (chiVe), and concept exploration over internal LLM features using a Sparse Autoencoder (SAE).\nSee Also\nTrendscape 1.0 (NLP 2025) / Trendscape Project\n","date":"September 2024","permalink":"https://jhonda.ochakumi.com/research/yans2024-concept-exploration/","section":"Research","summary":"Visualizing inter-concept relations via two approaches: structural analysis of Aozora Bunko works with Word2Vec (chiVe) and exploration of internal LLM features with an SAE","title":"Building an Inter-Concept Exploration Method on Word Embedding Spaces and Applying It to the Mechanistic Interpretability of Large Language Models"},{"content":"About\nTrendscape is a toolkit for exploring inter-concept relationships on language model latent spaces. Natural language inputs are mapped to embedding spaces where neighbor graphs are constructed, and path discovery across these graphs reveals how concepts relate and connect. 
Verified with Word2Vec (chiVe) on literary works and Sparse Autoencoder (SAE) features from LLM internals.\nPresentations\nYANS 2024 [S5-P02]: Building an Inter-Concept Exploration Method on Word Embedding Spaces and Applying It to the Mechanistic Interpretability of Large Language Models\nNLP 2025 [P2-25]: Trendscape 1.0: Concept Exploration on Language Model Latent Spaces\nTech Stack\nPython, Sentence-Transformers, Polars, marimo, Plotly\nSee Also\nInter-concept exploration method (YANS 2024) / Trendscape 1.0 (NLP 2025)\n","date":"May 2024","permalink":"https://jhonda.ochakumi.com/projects/trendscape/","section":"Projects","summary":"A tool for exploring inter-concept relationships on language model latent spaces. It builds neighbor graphs in embedding spaces and visualizes how concepts connect through path search. Presented at YANS 2024 and NLP 2025.","title":"Trendscape — Concept Exploration on Latent Spaces"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/tags/visualization/","section":"Tags","summary":"","title":"Visualization"},{"content":"","date":null,"permalink":"https://jhonda.ochakumi.com/archive/","section":"Junya G. Honda","summary":"","title":"Archive"},{"content":"Education\n2025–: Master's in Computer Science and Engineering, Toyohashi University of Technology (TUT). Uehara Lab. Global Technology Architects Course (GAC).\n2023–2025: Bachelor's in Computer Science and Engineering, Toyohashi University of Technology (TUT). Uehara Lab (Apr 2024–). Transfer from KOSEN. Global Technology Architects Course (GAC).\n2018–2023: Information, Communication, and Electronics Engineering, National Institute of Technology (KOSEN), Kumamoto College. Shintani Lab (2022–2023).\nResearch Interests\nNeural Data Analysis (iEEG/EEG), Explainable AI (XAI), Graph Neural Networks (GNN), Multi-Agent LLM Systems, Representation Analysis\nPublications & Presentations\nFirst Author\n[B4-21] Communication Topology Shapes Deliberation: Analyzing Accuracy and Diversity in Multi-Agent LLMs. J.G. Honda, K. Sakamoto, J. Ukita, T. Kojima, Y. Iwasawa, Y. Matsuo. NLP 2026, Mar 2026.\n[LBA011] Classification of Epileptic Seizures Using a Graph Neural Network and Counterfactual Explanation. J.G. Honda, Y. Takayama, K. Uehara. JHBM 2026, Mar 2026, Himeji.\n[P2-25] Trendscape 1.0: Concept Exploration on Language Model Latent Spaces. 
J.G. Honda, K. Sakamoto, S. Takagi, Y. Hayashi, S. Ogawa, Y. Matsuo. NLP 2025, Mar 2025.\n[S5-P02] Building an Inter-Concept Exploration Method on Word Embedding Spaces and Applying It to the Mechanistic Interpretability of Large Language Models. J.G. Honda, S. Ogawa, K. Sakamoto. YANS 2024, Sep 2024.\nCo-authored\nY. Takayama, K. Mithani, K. Uehara, J.G. Honda, K. Kitajo, M. Iwasaki, T. Yamamoto, A. Ochi, H. Otsubo, G.M. Ibrahim. Elucidating the Mechanism of Preserved Consciousness During Seizures in Focal Epilepsy: A Focus on the Thalamic CM Nucleus–Cortical Network. 55th Annual Meeting of the Japanese Society of Clinical Neurophysiology, Nov 2025, Okinawa.\nY. Takayama, K. Mithani, K. Uehara, J.G. Honda, K. Kitajo, M. Iwasaki, T. Yamamoto, A. Ochi, H. Otsubo, G.M. Ibrahim. Elucidating the Mechanism of Preserved Consciousness During Seizures in Focal Epilepsy: A Focus on the Thalamic CM Nucleus–Cortical Network. 58th Annual Meeting of the Japan Epilepsy Society, Oct 2025, Tochigi.\nResearch Experience\nApr 2024–: Uehara Lab, Toyohashi University of Technology — Neural data analysis, GNN, explainable AI\nJan–Feb 2024: Research Intern (practical training), SickKids Hospital, Toronto, Canada — Epilepsy research with iEEG data\n2022–2023: Shintani Lab, National Institute of Technology, Kumamoto College — Time-series data analysis\nProjects\nTomori — Conversational AI companion for dementia support (Re-MENTIA)\nTrendscape — Concept exploration tool on language model latent spaces\nAwards & Grants\nIPA Mitou Advanced Program, first half of 2025 — “Development of a virtual helper for people with dementia and creation of an independence-support platform,” PM Hiroshi Ishiguro; accepted and completed.\nWork Experience\nMar 2025–: Founding Member & Engineer, Re-MENTIA Inc. — AI companion for dementia care\nApr 2025–: International House Tutor, TUT International Exchange Center\nNov 2024–: Software Engineer (Contract), Emosta Inc.\nContact\nGitHub: gomagoma7\nLinkedIn: Junya G. Honda\n","date":null,"permalink":"https://jhonda.ochakumi.com/cv/","section":"Junya G. Honda","summary":"","title":"CV"}]