sub:assertion {
<https://doi.org/10.48550/arXiv.2508.15790>
dct:title "KG-o1: Enhancing Multi-hop Question Answering in Large Language Models via Knowledge Graph Integration" ;
<http://purl.org/spar/cito/describes> <https://neverblink.eu/ontologies/llm-kg/methods#KGo1> ;
<http://purl.org/spar/cito/discusses> <https://neverblink.eu/ontologies/llm-kg/methods#ChatGPT4o> ,
<https://neverblink.eu/ontologies/llm-kg/methods#ChatGPT4oMini> ,
<https://neverblink.eu/ontologies/llm-kg/methods#DeepSeekR1> ,
<https://neverblink.eu/ontologies/llm-kg/methods#GRPO> ,
<https://neverblink.eu/ontologies/llm-kg/methods#Gemini20FlashThinking> ,
<https://neverblink.eu/ontologies/llm-kg/methods#O1Mini> ,
<https://neverblink.eu/ontologies/llm-kg/methods#OpenO1> ,
<https://neverblink.eu/ontologies/llm-kg/methods#QwQ32BPreview> ;
a prov:Entity .
<https://neverblink.eu/ontologies/llm-kg/methods#ChatGPT4o>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "ChatGPT-4o is an advanced general-purpose large language model (GPLLM) used in two contexts: first, as a tool to generate multi-hop questions for the KG-MHQA SFT dataset creation, and second, as a powerful baseline for performance comparison in the experiments." ;
rdfs:label "ChatGPT-4o" .
<https://neverblink.eu/ontologies/llm-kg/methods#ChatGPT4oMini>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "ChatGPT4o-mini is an advanced general-purpose large language model (GPLLM) used as a baseline for comparative evaluation against the proposed KG-o1 models on multi-hop reasoning tasks." ;
rdfs:label "ChatGPT4o-mini" .
<https://neverblink.eu/ontologies/llm-kg/methods#DeepSeekR1>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "DeepSeek-R1 is a specific large reasoning model (LRM) used as a strong baseline for evaluating the performance of KG-o1 models on multi-hop question answering datasets." ;
rdfs:label "DeepSeek-R1" .
<https://neverblink.eu/ontologies/llm-kg/methods#GRPO>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "GRPO is a reinforcement learning method that serves as a comparative baseline in the ablation studies, where its performance in boosting LLMs' multi-hop reasoning is contrasted with other fine-tuning and optimization strategies, including the paper's Self-improved Adaptive DPO." ;
rdfs:label "GRPO" .
<https://neverblink.eu/ontologies/llm-kg/methods#Gemini20FlashThinking>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "Gemini 2.0 Flash Thinking is a specific large reasoning model (LRM) used as a strong baseline for evaluating the performance of KG-o1 models on multi-hop question answering tasks, highlighting its advanced reasoning capabilities." ;
rdfs:label "Gemini 2.0 Flash Thinking" .
<https://neverblink.eu/ontologies/llm-kg/methods#KGo1>
dct:subject <https://neverblink.eu/ontologies/llm-kg/categories#KGEnhancedLLMPretraining> ;
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "KG-o1 is a novel four-stage framework that integrates Knowledge Graphs (KGs) to enhance Large Language Models' (LLMs) multi-hop reasoning abilities. It involves constructing KG-derived datasets (KG-MHQA SFT and DPO) and using them to fine-tune LLMs (via Supervised Fine-Tuning and a \"Self-improved Adaptive DPO\" strategy), aiming to improve the LLM's intrinsic knowledge expression and reasoning capabilities during a training stage by internalizing logical paths." ;
rdfs:label "KG-o1" ;
<https://neverblink.eu/ontologies/llm-kg/hasTopCategory> <https://neverblink.eu/ontologies/llm-kg/top-categories#KGEnhancedLLM> .
<https://neverblink.eu/ontologies/llm-kg/methods#O1Mini>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "o1-mini is a specific large reasoning model (LRM) mentioned as a prominent baseline for comparison against the proposed KG-o1 models in multi-hop question answering tasks." ;
rdfs:label "o1-mini" .
<https://neverblink.eu/ontologies/llm-kg/methods#OpenO1>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "Open-o1 is a specific influential large reasoning model (LRM) from the open-source community, used as a baseline for performance comparison in the experiments of the paper." ;
rdfs:label "Open-o1" .
<https://neverblink.eu/ontologies/llm-kg/methods#QwQ32BPreview>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "QwQ-32B-Preview is a specific large reasoning model (LRM) included as a prominent baseline for comparative experiments against the KG-o1 models on multi-hop reasoning tasks." ;
rdfs:label "QwQ-32B-Preview" .
}