@prefix this: <https://w3id.org/np/RAANbcUJHsO19gDp8qYoRLuNbdbqY3P2C4ZaGJztExaQQ> .
@prefix sub: <https://w3id.org/np/RAANbcUJHsO19gDp8qYoRLuNbdbqY3P2C4ZaGJztExaQQ/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix npx: <http://purl.org/nanopub/x/> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix cito: <http://purl.org/spar/cito/> .
@prefix fabio: <http://purl.org/spar/fabio/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix llmkg: <https://neverblink.eu/ontologies/llm-kg/> .
@prefix methods: <https://neverblink.eu/ontologies/llm-kg/methods#> .

sub:Head {
  this: a np:Nanopublication ;
    np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo .
}

sub:assertion {
  <https://doi.org/10.48550/arXiv.2404.07677> a prov:Entity ;
    dct:title "ODA: Observation-Driven Agent for integrating LLMs and Knowledge Graphs" ;
    cito:describes methods:Oda ;
    cito:discusses methods:CoT , methods:DirectAnsweringGPT35 , methods:DirectAnsweringGPT4 ,
      methods:Raco , methods:Rag , methods:Re2G , methods:SelfConsistency ,
      methods:SparqlQa , methods:Tog .

  methods:Oda a fabio:Workflow ;
    rdfs:label "ODA: Observation-Driven Agent" ;
    rdfs:comment "ODA is a novel AI agent framework that synergistically integrates LLMs and KGs for KG-centric tasks, particularly KBQA. It employs a cyclical observation-action-reflection paradigm, where a recursive observation mechanism leverages KG patterns to guide the LLM's reasoning process, addressing the exponential growth of knowledge in KGs." ;
    dct:subject <https://neverblink.eu/ontologies/llm-kg/categories#SynergizedReasoning> ;
    llmkg:hasTopCategory <https://neverblink.eu/ontologies/llm-kg/top-categories#SynergizedLLMKG> .

  methods:CoT a fabio:Workflow ;
    rdfs:label "CoT (Chain-of-Thought)" ;
    rdfs:comment "Chain-of-Thought (CoT) prompting is a technique where LLMs are instructed to generate intermediate reasoning steps before providing a final answer. It is used as a baseline to assess how ODA's KG-driven observation and reasoning compares to step-by-step reasoning within the LLM." .

  methods:DirectAnsweringGPT35 a fabio:Workflow ;
    rdfs:label "Direct answering with GPT-3.5" ;
    rdfs:comment "This method serves as a baseline, representing a direct prompting approach using the GPT-3.5 model without explicit external knowledge integration, for comparison against the proposed ODA framework." .

  methods:DirectAnsweringGPT4 a fabio:Workflow ;
    rdfs:label "Direct answering with GPT-4" ;
    rdfs:comment "This method serves as a strong baseline, representing a direct prompting approach using the more advanced GPT-4 model without explicit external knowledge integration, to evaluate the performance gains of ODA." .

  methods:Raco a fabio:Workflow ;
    rdfs:label "RACo" ;
    rdfs:comment "RACo (Retrieval-Augmented CoT) is listed as a knowledge-combined method used for benchmarking ODA. It likely enhances Chain-of-Thought reasoning by retrieving relevant information, potentially from KGs, to guide the LLM's thought process." .

  methods:Rag a fabio:Workflow ;
    rdfs:label "RAG" ;
    rdfs:comment "RAG (Retrieval-Augmented Generation) is a prominent knowledge-combined model used as a baseline. It integrates information retrieval with text generation, typically by retrieving relevant documents or facts to augment the LLM's input, thereby enhancing its ability to answer questions." .

  methods:Re2G a fabio:Workflow ;
    rdfs:label "Re2G" ;
    rdfs:comment "Re2G is presented as a knowledge-combined fine-tuned method for comparative evaluation against ODA. This method likely combines reasoning and retrieval aspects to leverage external knowledge for improved performance in natural language tasks." .

  methods:SelfConsistency a fabio:Workflow ;
    rdfs:label "Self-Consistency" ;
    rdfs:comment "Self-Consistency is a prompt-based method used as a baseline to evaluate ODA's performance. It aims to improve reasoning by sampling diverse reasoning paths and aggregating their results, demonstrating a common strategy for enhancing LLM output without external knowledge graphs." .

  methods:SparqlQa a fabio:Workflow ;
    rdfs:label "SPARQL-QA" ;
    rdfs:comment "SPARQL-QA is a knowledge-combined method mentioned as a fine-tuned baseline. This method likely involves generating or executing SPARQL queries against a KG to answer questions, representing an established approach for KG Question Answering." .

  methods:Tog a fabio:Workflow ;
    rdfs:label "ToG" ;
    rdfs:comment "ToG (Think-on-Graph) is a method integrating LLMs with KGs to bolster question-answering proficiency. It serves as a key baseline for ODA, allowing for a direct comparison of different LLM-KG integration strategies for complex reasoning tasks." .
}

sub:provenance {
  sub:assertion prov:wasAttributedTo <https://neverblink.eu/ontologies/llm-kg/agent> ;
    prov:wasDerivedFrom <https://doi.org/10.48550/arXiv.2404.07677> .
}

sub:pubinfo {
  this: rdfs:label "LLM-KG assessment for paper 10.48550/arXiv.2404.07677" ;
    dct:created "2026-03-13T16:03:34.932Z"^^xsd:dateTime ;
    dct:creator <https://neverblink.eu/ontologies/llm-kg/agent> ;
    npx:hasNanopubType <https://neverblink.eu/ontologies/llm-kg/PaperAssessmentResult> ;
    npx:supersedes <https://w3id.org/np/RAysrgEne51z8K3dI4LRMwq4i8mkwg9-dPydWC4nuIOi0> .

  sub:sig npx:hasAlgorithm "RSA" ;
    npx:hasPublicKey "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwNz2QK3SEifno78S7+48zUB0xpTex3mAzW73ZimHqNcdEMU5/apslrGrTHGFAt/Chocgo++r6JQp5ygY7NyJHGWdaIqnt85pjX4PbNfLAvapyUO00qZP34fY61w4eZ9UMtleWEsmZKRtQPyJ8ODl46i/rfPuZlcJGpM9Nmy5mpGWuepqIEvF4a/t7pLVeCEDFSYXT+yaiygt6ynIK5f7TtEDhZpeUf/Q74WhMPJXm4yTU/hqOX4IW+50kWHNArGGZwUaXwzyG6M3Zd6UMModryGkLqS4H/MSE3ZA1Ylnms7BfWLEXhMWlaKi6HRV4nGRDLhxVSi9LSRi3LWKLhNIIQIDAQAB" ;
    npx:hasSignature "iUspKUO4uEZ+7PCKYJN7QQzJciLWY4UKHRL6A2DxR1KJy4EbIn1oqGEyvIJnjp8bDgpN7SuvqYGK/qbzpu3E1CkAeJbYD2eKvq8JUOa7aPBjPH2oY4rM+td0BNCO1ZeJS21K+BX1RwHWi6yOGI8rAPEGm8zJfV2tcuZ3Byekm5/3h6+63ysJtPggyg804z6DVguHxaLu134fnHUg9lWw1S/45yfh/sR2XRBBH4ub3w3Rf2kvw3AGoFwRZd3FZ9/6YRaW+1LGyebe5L/IczgxUz6tax8NqLQ5cPn/ZmhNSlAt38WseSeHcSRAZYlDLFYGUxZQ5tnwqDRdODfNEqMVXA==" ;
    npx:hasSignatureTarget this: ;
    npx:signedBy <https://neverblink.eu/ontologies/llm-kg/agent> .
}