sub:assertion {
<https://doi.org/10.48550/arXiv.2404.07677>
dct:title "ODA: Observation-Driven Agent for integrating LLMs and Knowledge Graphs" ;
<http://purl.org/spar/cito/describes> <https://neverblink.eu/ontologies/llm-kg/methods#Oda> ;
<http://purl.org/spar/cito/discusses>
  <https://neverblink.eu/ontologies/llm-kg/methods#CoT> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#DirectAnsweringGPT35> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#DirectAnsweringGPT4> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#Raco> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#Rag> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#Re2G> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#SelfConsistency> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#SparqlQa> ,
  <https://neverblink.eu/ontologies/llm-kg/methods#Tog> ;
a prov:Entity .
<https://neverblink.eu/ontologies/llm-kg/methods#CoT>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "Chain-of-Thought (CoT) prompting is a technique where LLMs are instructed to generate intermediate reasoning steps before providing a final answer. It is used as a baseline to assess how ODA's KG-driven observation and reasoning compares to step-by-step reasoning within the LLM." ;
rdfs:label "CoT (Chain-of-Thought)" .
<https://neverblink.eu/ontologies/llm-kg/methods#DirectAnsweringGPT35>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "This method serves as a baseline, representing a direct prompting approach using the GPT-3.5 model without explicit external knowledge integration, for comparison against the proposed ODA framework." ;
rdfs:label "Direct answering with GPT-3.5" .
<https://neverblink.eu/ontologies/llm-kg/methods#DirectAnsweringGPT4>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "This method serves as a strong baseline, representing a direct prompting approach using the more advanced GPT-4 model without explicit external knowledge integration, to evaluate the performance gains of ODA." ;
rdfs:label "Direct answering with GPT-4" .
<https://neverblink.eu/ontologies/llm-kg/methods#Oda>
dct:subject <https://neverblink.eu/ontologies/llm-kg/categories#SynergizedReasoning> ;
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "ODA is a novel AI agent framework that synergistically integrates LLMs and KGs for KG-centric tasks, particularly KBQA. It employs a cyclical observation-action-reflection paradigm, where a recursive observation mechanism leverages KG patterns to guide the LLM's reasoning process, addressing the exponential growth of knowledge in KGs." ;
rdfs:label "ODA: Observation-Driven Agent" ;
<https://neverblink.eu/ontologies/llm-kg/hasTopCategory> <https://neverblink.eu/ontologies/llm-kg/top-categories#SynergizedLLMKG> .
<https://neverblink.eu/ontologies/llm-kg/methods#Raco>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "RACo (Retrieval-Augmented CoT) is listed as a knowledge-combined method used for benchmarking ODA. It likely enhances Chain-of-Thought reasoning by retrieving relevant information, potentially from KGs, to guide the LLM's thought process." ;
rdfs:label "RACo" .
<https://neverblink.eu/ontologies/llm-kg/methods#Rag>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "RAG (Retrieval-Augmented Generation) is a prominent knowledge-combined model used as a baseline. It integrates information retrieval with text generation, typically by retrieving relevant documents or facts to augment the LLM's input, thereby enhancing its ability to answer questions." ;
rdfs:label "RAG" .
<https://neverblink.eu/ontologies/llm-kg/methods#Re2G>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "Re2G is presented as a knowledge-combined fine-tuned method for comparative evaluation against ODA. This method likely combines reasoning and retrieval aspects to leverage external knowledge for improved performance in natural language tasks." ;
rdfs:label "Re2G" .
<https://neverblink.eu/ontologies/llm-kg/methods#SelfConsistency>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "Self-Consistency is a prompt-based method used as a baseline to evaluate ODA's performance. It aims to improve reasoning by sampling diverse reasoning paths and aggregating their results, demonstrating a common strategy for enhancing LLM output without external knowledge graphs." ;
rdfs:label "Self-Consistency" .
<https://neverblink.eu/ontologies/llm-kg/methods#SparqlQa>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "SPARQL-QA is a knowledge-combined method mentioned as a fine-tuned baseline. This method likely involves generating or executing SPARQL queries against a KG to answer questions, representing an established approach for KG Question Answering." ;
rdfs:label "SPARQL-QA" .
<https://neverblink.eu/ontologies/llm-kg/methods#Tog>
a <http://purl.org/spar/fabio/Workflow> ;
rdfs:comment "ToG (Think-on-Graph) is a method that integrates LLMs with KGs to improve question-answering performance. It serves as a key baseline for ODA, allowing a direct comparison of different LLM-KG integration strategies on complex reasoning tasks." ;
rdfs:label "ToG" .
}