14th International Conference on Control, Modelling, Computing and Applications (CMCA 2025)

March 15 ~ 16, 2025, Vienna, Austria

Accepted Papers


Deepfake Detection System Through Collective Intelligence in Public Blockchain Environment

Mustafa Zemin, Department of Information Systems, Middle East Technical University, Ankara, Türkiye

ABSTRACT

The increasing popularity of deepfake technology poses a growing threat to information integrity and security. This paper proposes a Deepfake Detection System that leverages public blockchain technology and collective intelligence. It uses smart contracts on the Ethereum blockchain to provide a secure, decentralized way of verifying media content, ensuring an auditable and tamper-resistant framework, and it integrates concepts from electronic voting so that a network of participants can assess the authenticity of content through consensus mechanisms. This community-driven, decentralized model enhances detection accuracy while avoiding single points of failure. Test results show that the system is robust, reliable, and able to scale deepfake detection, offering a sustainable way of combating digital misinformation. The proposed solution enhances deepfake detection capabilities and provides a framework for applying blockchain-based collaboration, safely and trustlessly, in other domains facing similar verification challenges.
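As a rough illustration of the collective-intelligence voting idea described above, the sketch below tallies participant votes on a media item's content hash and reaches a verdict once a quorum and a super-majority threshold are met. The class, field names, quorum, and threshold are hypothetical and are not taken from the paper; an on-chain version would express the same rules in an Ethereum smart contract.

```python
# Hypothetical off-chain sketch of consensus voting over a media hash.
# Names, quorum, and thresholds are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MediaVote:
    """Collects authenticity votes for one media item, keyed by its content hash."""
    media_hash: str                      # e.g. keccak256 of the media file
    quorum: int = 5                      # minimum number of voters before a verdict
    threshold: float = 0.66              # super-majority needed to label the content
    votes: dict = field(default_factory=dict)   # voter address -> True (authentic) / False (deepfake)

    def cast_vote(self, voter: str, is_authentic: bool) -> None:
        if voter in self.votes:
            raise ValueError("each participant may vote only once")
        self.votes[voter] = is_authentic

    def verdict(self) -> str:
        if len(self.votes) < self.quorum:
            return "pending"             # not enough participation yet
        authentic_share = sum(self.votes.values()) / len(self.votes)
        if authentic_share >= self.threshold:
            return "authentic"
        if authentic_share <= 1 - self.threshold:
            return "deepfake"
        return "disputed"                # no super-majority either way

# Usage: five hypothetical voters assess one item.
poll = MediaVote(media_hash="0xabc123")
for addr, vote in [("0x1", False), ("0x2", False), ("0x3", False), ("0x4", True), ("0x5", False)]:
    poll.cast_vote(addr, vote)
print(poll.verdict())                    # -> "deepfake"
```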

Keywords

Deepfake Detection, Public Blockchain, Electronic Voting, Collective Intelligence, Ethereum.


Domain Centric Data Sharing and Scaling AI & Advanced Analytics with Decentralized Data Architectures

Meethun Panda1 and Soumyodeep Mukherjee2, 1Associate Partner, Bain & Company, Dubai, UAE, 2Associate Director, Genmab, Avenel, NJ, USA

ABSTRACT

As organizations aim to become data-driven, they face a persistent paradox: centralization enables control and consistency, while decentralization fosters scalability and innovation. This "Data Platform Unification Paradox" creates a dynamic tension that challenges the design and management of modern data platforms. This paper introduces Data Mesh as a solution to this paradox, combining domain-driven decentralization with federated governance to balance these competing demands. Furthermore, it explores how emerging technologies like quantum databases and multi-agent frameworks powered by Large Language Models (LLMs) could revolutionize analytics, emphasizing the need for robust data architectures to support such advancements.

Keywords

Data Mesh, Data Governance, Centralization, Decentralization, Data Paradox, AI, Quantum Databases, LLM Multi-Agent Systems, Distributed Data Platforms, Information Retrieval and AI.


Consilium: Advancing Scientific Research to Public Understanding via Generative AI and Summarization

Hao (William) Liyuan1 and Yu Sun2, 1Boston Latin School, Boston, MA 02115, 2California State Polytechnic University, Pomona, CA 91768

ABSTRACT

The inherent complexity and limited accessibility of scientific research often act as barriers between researchers and the broader public, delaying informed policymaking. Consilium addresses this challenge by employing a Retrieval-Augmented Generation (RAG) model to distill intricate research papers into simplified, actionable policy briefs. This system integrates preprocessing, retrieval, and generation stages, leveraging vector embeddings and large language models to create effective summaries. Our experiments show that Consilium captures semantic insights with high fidelity and textual precision. The tool prioritizes intellectual property safeguards, user customization, and accessible reading levels for non-experts. Identified challenges include runtime efficiency, ethical dilemmas, and multilingual support limitations. Future enhancements aim to improve interactivity, feedback mechanisms, and multilingual applicability, positioning Consilium as an innovative solution for science-society integration.
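The abstract describes a standard preprocess-retrieve-generate pipeline. The sketch below shows that shape under stated assumptions: the embed() and generate() back-ends are placeholders, and the function and parameter names are invented for illustration rather than taken from Consilium.

```python
# Minimal retrieval-augmented generation skeleton; embed() and generate()
# are hypothetical back-ends (e.g. a sentence encoder and an LLM API).
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per text."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call a large language model and return its completion."""
    raise NotImplementedError

def chunk(paper: str, size: int = 800) -> list[str]:
    """Preprocessing: split a paper into fixed-size character chunks."""
    return [paper[i:i + size] for i in range(0, len(paper), size)]

def retrieve(query: str, chunks: list[str], chunk_vecs: np.ndarray, k: int = 4) -> list[str]:
    """Rank chunks by cosine similarity to the query and keep the top k."""
    q = embed([query])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def policy_brief(query: str, paper: str) -> str:
    """Retrieve the most relevant excerpts and summarize them for non-experts."""
    chunks = chunk(paper)
    context = "\n".join(retrieve(query, chunks, embed(chunks)))
    prompt = (f"Summarize the following excerpts as a plain-language policy brief "
              f"answering: {query}\n\n{context}")
    return generate(prompt)
```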

Keywords

Accessible research, Policy brief creation, RAG systems, Scientific outreach.


Advancing Retrieval-augmented Generation for Persian: Development of Language Models, Comprehensive Benchmarks, and Best Practices for Optimization

Sara Bourbour Hosseinbeigi1, Sina Asghari2, Mohammad Hossein Shalchian3, Mohammad Amin Abbasi4, 1Department of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran, 2,4Department of Computer Science, Iran University of Science and Technology, Tehran, Iran, 3Department of Computer Engineering, Sharif University of Technology, Tehran, Iran

ABSTRACT

This study addresses the unique challenges of implementing Retrieval-Augmented Generation (RAG) systems in low-resource languages, focusing on Persian's complex morphology and flexible syntax. By developing Persian-specific models, MatinaRoberta (a masked language model) and MatinaSRoberta (a fine-tuned Sentence-BERT), and establishing a robust benchmarking framework, the research aims to optimize retrieval and generation accuracy. These models were trained on a diverse corpus of 73.11 billion Persian tokens and evaluated using three datasets: general knowledge (PQuad), scientific-specialized texts, and organizational reports. The methodology involved extensive pretraining, fine-tuning with tailored loss functions, and systematic evaluations using both traditional metrics and the Retrieval-Augmented Generation Assessment (RAGAS) framework. Key findings indicate that MatinaSRoberta outperformed existing embeddings, achieving superior retrieval accuracy and contextual relevance across datasets. Additionally, temperature tuning, chunk size adjustments, and document summary indexing were explored to refine RAG configurations. Larger models like LLaMA-3.1 (70B) consistently demonstrated the highest generation accuracy, while smaller models faced challenges with domain-specific and formal contexts. The results highlight the potential of tailored embeddings and retrieval-generation configurations for advancing RAG systems in Persian. Implications extend to enhancing NLP applications such as search engines and legal document analysis in low-resource languages. Future research will explore broader applications and further optimization strategies for underrepresented linguistic contexts.
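The abstract mentions sweeping temperature and chunk size to refine the RAG configuration. A minimal sketch of such a sweep is shown below; score_rag() stands in for a RAGAS-style evaluation run, and all names and grid values are assumptions, not details from the paper.

```python
# Illustrative grid search over the RAG knobs the abstract mentions
# (chunk size and sampling temperature); score_rag() is hypothetical.
from itertools import product

def score_rag(chunk_size: int, temperature: float) -> float:
    """Placeholder: build the RAG pipeline with these settings, run the
    benchmark queries, and return an aggregate RAGAS-style score."""
    raise NotImplementedError

def best_configuration(chunk_sizes=(256, 512, 1024), temperatures=(0.0, 0.3, 0.7)):
    """Evaluate each (chunk size, temperature) pair and return the best one."""
    results = {(c, t): score_rag(c, t) for c, t in product(chunk_sizes, temperatures)}
    return max(results, key=results.get), results
```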

Keywords

Retrieval-Augmented Generation, Large Language Models, Benchmarking, Persian, Sentence Embeddings.


Numerical Simulation of Universal and Adiabatic Quantum Computation by Time Crystal: Proposal of Quantum Time Crystal Computing

Hikaru Wakaura and Andriyan B. Suksmono, QuantScape Inc., 4-11-18, Manshon-Shimizudai, Meguro, Tokyo, 153-0064, Japan

ABSTRACT

The time crystal is a well-known phenomenon observed in quantum Floquet systems: the states of the system propagate spontaneously while preserving time-reversal symmetry. It is a form of many-body localisation that prolongs the lifetime of quantum states. However, it is destroyed by operations from the external environment, such as gate operations. We therefore propose a method to control the time crystal by modifying the Hamiltonian and driving noise while conserving time-reversal symmetry. Using this method, we demonstrate simple gate operations, Grover's algorithm, and the quantum Fourier transform on a time crystal, as well as adiabatic Grover's and Shor's algorithms.
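For readers unfamiliar with the setting, the standard Floquet and discrete-time-crystal definitions are sketched below. These are textbook background, not formulas taken from the paper, and the paper's own formulation may differ in detail.

```latex
% Periodically driven (Floquet) system and its stroboscopic evolution over one period T:
\[
  H(t+T) = H(t), \qquad
  U_F = \mathcal{T}\exp\!\left(-\frac{i}{\hbar}\int_0^{T} H(t)\,\mathrm{d}t\right)
\]
% A discrete time crystal responds at a subharmonic of the drive: some observable O obeys
\[
  \langle O(t + 2T) \rangle \approx \langle O(t) \rangle
  \quad\text{while}\quad
  \langle O(t + T) \rangle \neq \langle O(t) \rangle ,
\]
% and many-body localisation stabilises this pattern against heating, which is the
% long-lifetime property the abstract refers to.
```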

Keywords

Quantum computing, Quantum algorithm, Time Crystal.


Knowledge Discovery for Intelligent High-Rise Structure Type Optimization Based on Rough Set Distinguished Matrix Approximation Algorithm

Yuanzheng Zhang, School of Computer Science, University of Southern California, Los Angeles 90007, USA

ABSTRACT

Knowledge acquisition is the key factor determining the quality of intelligent structural type optimization. Rough set theory is a new knowledge discovery method, and the attribute approximation algorithm is its core. First, the process of the attribute approximation algorithm based on the differentiation matrix is presented for the scenario of intelligent type optimization of high-rise structures. Second, based on a case set of high-rise structural engineering projects, a conditional attribute approximation algorithm is established by determining the differentiation matrix, the relative D-kernel, and the D-approximation of the conditional attributes of the case set. Finally, the established approximation algorithm is used to obtain the decision-rule knowledge for intelligent structural type optimization. Practice shows that this method is simple, efficient, and theoretically complete; it overcomes the shortcomings of traditional knowledge discovery methods, which cannot perform attribute simplification and are inefficient, and it provides a new method for knowledge acquisition in intelligent structural type optimization.
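As a hedged illustration of the matrix-based attribute reduction idea the abstract outlines, the sketch below builds a generic discernibility-style matrix over a toy decision table, extracts the indispensable (core) attributes, and checks whether a candidate attribute subset preserves all decisions. The toy table, attribute names, and structural-type labels are invented for illustration and do not come from the paper.

```python
# Generic rough-set attribute-reduction sketch (discernibility/"differentiation" matrix).
from itertools import combinations

def discernibility_matrix(cases, condition_attrs, decision_attr):
    """For every pair of cases with different decisions, record the condition
    attributes on which the two cases differ."""
    entries = []
    for a, b in combinations(cases, 2):
        if a[decision_attr] != b[decision_attr]:
            diff = frozenset(c for c in condition_attrs if a[c] != b[c])
            if diff:
                entries.append(diff)
    return entries

def relative_core(entries):
    """Attributes that appear alone in some matrix entry are indispensable (the core)."""
    return {next(iter(e)) for e in entries if len(e) == 1}

def is_reduct(subset, entries):
    """A subset is a reduct candidate if it intersects every matrix entry."""
    return all(subset & e for e in entries)

# Toy decision table: condition attributes describe a high-rise project,
# the decision attribute is the chosen structural type (values are invented).
cases = [
    {"height": "tall", "site": "soft", "use": "office", "type": "tube"},
    {"height": "tall", "site": "firm", "use": "office", "type": "frame-core"},
    {"height": "mid",  "site": "firm", "use": "hotel",  "type": "frame"},
]
entries = discernibility_matrix(cases, ["height", "site", "use"], "type")
print(relative_core(entries))                  # indispensable attributes
print(is_reduct({"height", "site"}, entries))  # does this subset preserve the decisions?
```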

Keywords

Rough Sets, Distinction Matrix, Approximation Algorithm, High-Rise Structure, Intelligent Optimization.


Compilation, Optimization, Error Mitigation, and Machine Learning in Quantum Algorithms

Paul Wang, Jianzhou Mao, and Eric Sakk, Department of Computer Science, Morgan State University, Baltimore, Maryland 21251, USA

ABSTRACT

This paper discusses the compilation, optimization, and error mitigation of quantum algorithms, essential steps for executing real-world quantum algorithms. Quantum algorithms running on a hybrid platform with a QPU and CPU/GPU take advantage of existing high-performance computing power together with quantum-enabled exponential speedups. The proposed approximate quantum Fourier transform (AQFT) for quantum algorithm optimization improves circuit execution on top of the exponential speedup that the quantum Fourier transform already provides.
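For orientation, the sketch below builds a textbook approximate QFT by truncating the controlled-phase ladder of the exact QFT, dropping the smallest rotations. It assumes Qiskit is available; the function name, cutoff convention, and parameters are illustrative and not the specific circuit proposed in the paper.

```python
# Generic AQFT sketch: keep only controlled-phase rotations pi / 2**k with
# k < approximation_degree, which shortens the circuit relative to the exact QFT.
from math import pi
from qiskit import QuantumCircuit

def aqft(n_qubits: int, approximation_degree: int = 3) -> QuantumCircuit:
    """Build an n-qubit QFT circuit, truncated to the largest phase rotations."""
    qc = QuantumCircuit(n_qubits)
    for target in range(n_qubits):
        qc.h(target)
        for k, control in enumerate(range(target + 1, n_qubits), start=1):
            if k < approximation_degree:          # drop the tiny rotations
                qc.cp(pi / 2**k, control, target)
    # Reverse qubit order so the output matches the conventional QFT ordering.
    for i in range(n_qubits // 2):
        qc.swap(i, n_qubits - 1 - i)
    return qc

print(aqft(5, approximation_degree=3).count_ops())   # fewer cp gates than the exact QFT
```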

Keywords

Transpilation, Optimization, Error Mitigation, Quantum Machine Learning, Quantum Algorithms.