Authors:
(1) Yanpeng Ye, School of Computer Science and Engineering, University of New South Wales, Kensington, NSW, Australia, GreenDynamics Pty. Ltd, Kensington, NSW, Australia, and these authors contributed equally to this work;
(2) Jie Ren, GreenDynamics Pty. Ltd, Kensington, NSW, Australia, Department of Materials Science and Engineering, City University of Hong Kong, Hong Kong, China, and these authors contributed equally to this work;
(3) Shaozhou Wang, GreenDynamics Pty. Ltd, Kensington, NSW, Australia ([email protected]);
(4) Yuwei Wan, GreenDynamics Pty. Ltd, Kensington, NSW, Australia and Department of Linguistics and Translation, City University of Hong Kong, Hong Kong, China;
(5) Imran Razzak, School of Computer Science and Engineering, University of New South Wales, Kensington, NSW, Australia;
(6) Tong Xie, GreenDynamics Pty. Ltd, Kensington, NSW, Australia and School of Photovoltaic and Renewable Energy Engineering, University of New South Wales, Kensington, NSW, Australia ([email protected]);
(7) Wenjie Zhang, School of Computer Science and Engineering, University of New South Wales, Kensington, NSW, Australia ([email protected]).
Editor’s note: This article is part of a broader study. You’re reading Part 5 of 9. Read the rest below.
Table of Links
- Abstract and Introduction
- Methods
- Data preparation and schema design
- LLMs training, evaluation and inference
- Entity resolution
- Knowledge graph construction
- Result
- Discussion
- Conclusion and References
Entity resolution
The quality of the KG is crucial for its credibility, so we further check and correct the inference results before graph construction. For the core label, we divide standardization into two steps: ER-NF/A and ER-N/F. In the ER-NF/A stage, we apply ChemDataExtractor to derive the Formula and "Name - Acronym" pairs from the abstract. We then use the mat2vec model to embed the entities extracted by both the LLM and ChemDataExtractor. By comparing their similarities, we correct the core entities to guarantee their precision and the accurate correspondence between "Name" and "Acronym". However, the Name and Formula of materials are often placed under the wrong labels and are difficult to distinguish with ChemDataExtractor, so we conduct ER-N/F. We select 2,000 correctly classified labels as a training set and use them to fine-tune the LLM on this binary classification problem, and we select another 200 labels to evaluate the classification performance.
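The following is a minimal sketch of the ER-NF/A cross-check described above, assuming a locally available mat2vec Word2Vec model (the file path, the 0.85 similarity threshold, and the helper names are illustrative assumptions, not values from the paper):

```python
# Hedged sketch: cross-check LLM-extracted core entities against ChemDataExtractor
# mentions using mat2vec cosine similarity.
from chemdataextractor import Document
from gensim.models import Word2Vec
import numpy as np

w2v = Word2Vec.load("pretrained_embeddings")  # assumed path to mat2vec weights


def embed(phrase):
    """Average mat2vec vectors over in-vocabulary tokens; None if none are found."""
    vecs = [w2v.wv[tok] for tok in phrase.split() if tok in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else None


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def resolve_core_entities(abstract, llm_entities, threshold=0.85):
    """Map each LLM-extracted core entity to its closest ChemDataExtractor mention."""
    cde_mentions = [span.text for span in Document(abstract).cems]
    resolved = {}
    for ent in llm_entities:
        e_vec = embed(ent)
        if e_vec is None:
            continue
        candidates = []
        for mention in cde_mentions:
            m_vec = embed(mention)
            if m_vec is not None:
                candidates.append((mention, cosine(e_vec, m_vec)))
        if candidates:
            best, score = max(candidates, key=lambda x: x[1])
            if score >= threshold:
                resolved[ent] = best  # correct the LLM entity to the matched mention
    return resolved
```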
To normalize entities and relations in the other labels (ER-OL), we perform word embedding clustering to create an expert dictionary. We convert words into vectors with mat2vec and group semantically similar words into the same cluster based on their vector similarity. Specifically, we develop a density-based dynamic vector clustering method that forms clusters dynamically from the similarity of vector representations, without predefining the number of clusters. A similarity threshold determines cluster membership, allowing entities to join existing clusters or form new ones based on their proximity in the feature space, so the method adapts to the inherent structure of the data.
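A minimal sketch of this dynamic clustering, assuming entity embeddings have already been computed (the 0.75 threshold and centroid-update rule are illustrative assumptions):

```python
# Hedged sketch: an entity joins the most similar existing cluster if its
# similarity to that cluster's centroid exceeds the threshold; otherwise it
# starts a new cluster. No number of clusters is fixed in advance.
import numpy as np


def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def dynamic_vector_clustering(entity_vectors, threshold=0.75):
    """entity_vectors: dict mapping entity string -> embedding vector (np.ndarray)."""
    clusters = []  # each cluster: {"members": [...], "centroid": np.ndarray}
    for entity, vec in entity_vectors.items():
        best_cluster, best_sim = None, threshold
        for cluster in clusters:
            sim = cosine(vec, cluster["centroid"])
            if sim >= best_sim:
                best_cluster, best_sim = cluster, sim
        if best_cluster is not None:
            best_cluster["members"].append(entity)
            # keep the centroid as the running mean of member vectors
            n = len(best_cluster["members"])
            best_cluster["centroid"] += (vec - best_cluster["centroid"]) / n
        else:
            clusters.append({"members": [entity],
                             "centroid": np.array(vec, dtype=float)})
    return clusters
```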
By having domain experts in materials science name each cluster, we obtain a vocabulary of approximately 600 terms covering the structure, phase, applications, characterization methods, synthesis methods, properties, and descriptions of most energy materials. We then embed both the expert-dictionary vocabulary and the entities extracted by the LLM with mat2vec and compare their similarities to complete the standardization, ensuring that both entities and relations are correct. It is worth mentioning that the content of "Property" and "Descriptor" is often not limited to the energy field and may cover broader information. We therefore apply a relatively tolerant standardization to these two labels: matching content is normalized, but unmatched entities and relations are not deleted.
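A sketch of this dictionary-based standardization, reusing the `embed` and `cosine` helpers from the earlier snippet (the 0.8 threshold and the return conventions are illustrative assumptions):

```python
# Hedged sketch: map each LLM entity to its most similar expert-dictionary term.
# For the tolerant labels ("Property", "Descriptor") an unmatched entity is kept
# as-is; for all other labels it is dropped.
TOLERANT_LABELS = {"Property", "Descriptor"}


def standardize(entity, label, expert_dictionary, threshold=0.8):
    """Return the canonical dictionary term, the original entity (tolerant labels),
    or None when a strict label finds no sufficiently similar dictionary entry."""
    e_vec = embed(entity)
    best_term, best_sim = None, 0.0
    if e_vec is not None:
        for term in expert_dictionary:
            t_vec = embed(term)
            if t_vec is None:
                continue
            sim = cosine(e_vec, t_vec)
            if sim > best_sim:
                best_term, best_sim = term, sim
    if best_sim >= threshold:
        return best_term  # normalized to the expert vocabulary
    return entity if label in TOLERANT_LABELS else None
```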
To improve the quality of the training dataset, we select a small amount of high-quality data from the normalized inference output of each iteration and incorporate it into the training set. High-quality data is characterized by high precision and recall. The performance of the LLM fine-tuned on each progressively augmented training set is then evaluated to ensure optimal effectiveness.
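An illustrative sketch of this augmentation step, assuming each normalized inference sample carries precision and recall scores against a reference annotation (the 0.9 cut-offs are assumptions, not values reported in the paper):

```python
# Hedged sketch: keep only high-precision, high-recall inference outputs and fold
# them into the next round's training set.
def select_high_quality(samples, precision_min=0.9, recall_min=0.9):
    """samples: iterable of dicts with 'precision' and 'recall' fields."""
    return [s for s in samples
            if s["precision"] >= precision_min and s["recall"] >= recall_min]


def augment_training_set(training_set, inference_samples):
    training_set.extend(select_high_quality(inference_samples))
    return training_set
```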
This paper is available on arxiv under CC BY 4.0 DEED license.