17th International Conference on Networks & Communications (NeCoM 2025)

March 15 ~ 16, 2025, Vienna, Austria

Accepted Papers


Deepfake Detection System Through Collective Intelligence in Public Blockchain Environment

Mustafa Zemin, Department of Information Systems, Middle East Technical University, Ankara, Türkiye

ABSTRACT

The increasing popularity of deepfake technology poses a growing threat to information integrity and security. This paper proposes a Deepfake Detection System that leverages public blockchain and collective intelligence. It uses smart contracts on the Ethereum blockchain to provide a secure, decentralized way of verifying media content, ensuring an auditable and tamper-resistant framework, and it integrates concepts from electronic voting so that a network of participants can assess the authenticity of content through consensus mechanisms. This community-driven, decentralized model enhances detection accuracy while preventing single points of failure. Test results show that the system is robust, reliable, and scalable, offering a sustainable way to combat digital misinformation. Beyond enhancing deepfake detection, the proposed solution provides a framework for applying blockchain-based collaboration in other domains that face similar verification challenges.

Keywords

Deepfake Detection, Public Blockchain, Electronic Voting, Collective Intelligence, Ethereum.
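The abstract does not specify how participant votes are aggregated; as an illustration only, a simple threshold-based majority vote over participant verdicts (function name, participant ids, and threshold all hypothetical) might look like:

```python
def assess_content(votes, threshold=0.5):
    """Aggregate participant votes on a media item's authenticity.

    votes: mapping of participant id -> True (authentic) / False (deepfake).
    Returns "authentic" when the share of True votes exceeds the threshold.
    In the paper's setting such a tally would run inside an Ethereum smart
    contract rather than in off-chain Python.
    """
    if not votes:
        raise ValueError("no votes cast")
    share = sum(votes.values()) / len(votes)
    return "authentic" if share > threshold else "deepfake"
```

For example, `assess_content({"a": True, "b": True, "c": False})` returns "authentic", since two of the three participants voted for authenticity.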


Token Bridges in Blockchains: A Comprehensive Survey of Security Vulnerabilities, Failures, and Interoperability

Ruida Zeng, Brown University, Providence, RI, USA

ABSTRACT

Token bridges are critical enablers of blockchain interoperability, allowing heterogeneous networks to exchange assets, state, and messages seamlessly. Yet, these mechanisms introduce complex security vulnerabilities, as underscored by multi-million-dollar exploits and sophisticated attack vectors. Concurrently, non-adversarial failures and misconfigurations have demonstrated how seemingly minor errors can disrupt cross-chain operations. In this survey, we integrate insights from foundational blockchain theory, recent systematization-of-knowledge (SoK) analyses, empirical breach investigations, and post-quantum cryptographic considerations. We focus primarily on the security aspects and failure modes of token bridges while also highlighting new trends in governance, performance trade-offs, and cryptographic innovation. Our synthesis spans a diverse set of references, from early bridging prototypes to advanced cross-chain messaging frameworks and bridging protocols, to guide researchers, blockchain engineers, and policymakers in building secure, resilient, and future-proof cross-chain infrastructures.

Keywords

Blockchain, Token Bridges, Cross-chain Communication, Interoperability, Security Vulnerabilities, Operational Failures.


A Survey of Blockchain Applications in Health Insurance

Yisong Chen1 and Chuqing Zhao2, 1Georgia Institute of Technology, USA, 2Harvard University, Cambridge, MA, USA

ABSTRACT

This paper systematically analyzes blockchain’s applications in health insurance, evaluating its role in fraud prevention, claims processing, and data security. By integrating blockchain with AI and IoT, the study highlights its potential to enhance transparency, automate workflows, and ensure interoperability among stakeholders. Key findings include improvements in fraud detection, regulatory compliance, and operational efficiency, while challenges such as scalability and regulatory alignment remain critical to adoption. This study underscores blockchain’s transformative potential in creating a secure, efficient, and patient-centric health insurance ecosystem.

Keywords

Blockchain, Health insurance, Fraud detection, Claims processing, Data security, Automation, Interoperability, Smart contracts, Hybrid blockchain, Scalability, Patient-centric systems.


Bolt: A Bitcoin Transaction Latching Mechanism & Token Protocol

Frederick Liam Simon Honohan, B.A., B.A.I., M.Sc., OSCP

ABSTRACT

An electronic logic gate is composed of one or more inputs which combine to produce a single true or false output. Similarly, a Bitcoin transaction can be composed of multiple inputs; unlike a gate, however, it can create multiple outputs. When a transaction is processed, the computational programs it contains each produce a single binary result, like logic gates, but the transaction can only be mined into a block on the chain if all of those programs return true, as with an AND gate. This paper introduces a novel method that allows a transaction to process information not only from its given inputs, but also from the inputs and outputs of additional transactions. As a further use-case example of the new primitive, a complete, unbounded 'Simple Payment Verification'-compatible token protocol that overcomes the 'Back-To-Genesis' problem is outlined.
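The AND-gate analogy above can be sketched in a few lines of Python (a toy illustration, not the paper's mechanism): each script in a transaction yields a boolean, and the transaction is mineable only if every script evaluates to true.

```python
def transaction_mineable(script_results):
    """A transaction behaves like an AND gate over its scripts' outputs:
    it can be mined into a block only if every program returns true."""
    return all(script_results)
```

So `transaction_mineable([True, True])` is True, while a single failing script, as in `[True, False]`, invalidates the entire transaction.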


Empowering AI and Advanced Analytics Through Domain-centric Decentralized Data Architectures

Meethun Panda1 and Soumyodeep Mukherjee2, 1Associate Partner, Bain & Company, Dubai, UAE, 2Associate Director, Genmab, Avenel - NJ, USA

ABSTRACT

As organizations aim to become data-driven, they face a persistent paradox: centralization enables control and consistency, while decentralization fosters scalability and innovation. This "Data Platform Unification Paradox" creates a dynamic tension that challenges the design and management of modern data platforms. This paper introduces Data Mesh as a solution to this paradox, combining domain-driven decentralization with federated governance to balance these competing demands. Furthermore, it explores how emerging technologies like quantum databases and multi-agent frameworks powered by Large Language Models (LLMs) could revolutionize analytics, emphasizing the need for robust data architectures to support such advancements.

Keywords

Data Mesh, Data Governance, Centralization, Decentralization, Data Paradox, AI, Quantum Databases, LLM Multi-Agent Systems, Distributed Data Platforms, Information Retrieval and AI.


Consilium: Advancing Scientific Research to Public Understanding via Generative AI and Summarization

Hao (William) Liyuan1 and Yu Sun2, 1Boston Latin School, Boston, MA 02115, 2California State Polytechnic University, Pomona, CA 91768

ABSTRACT

The inherent complexity and limited accessibility of scientific research often act as barriers between researchers and the broader public, delaying informed policymaking. Consilium addresses this challenge by employing a Retrieval-Augmented Generation (RAG) model to distill intricate research papers into simplified, actionable policy briefs. This system integrates preprocessing, retrieval, and generation stages, leveraging vector embeddings and large language models to create effective summaries. Our experiments show that Consilium captures semantic insights with high fidelity and textual precision. The tool prioritizes intellectual property safeguards, user customization, and accessible reading levels for non-experts. Identified challenges include runtime efficiency, ethical dilemmas, and multilingual support limitations. Future enhancements aim to improve interactivity, feedback mechanisms, and multilingual applicability, positioning Consilium as an innovative solution for science-society integration.

Keywords

Accessible research, Policy brief creation, RAG systems, Scientific outreach.


Advancing Retrieval-augmented Generation for Persian: Development of Language Models, Comprehensive Benchmarks, and Best Practices for Optimization

Sara Bourbour Hosseinbeigi1, Sina Asghari2, Mohammad Hossein Shalchian3, Mohammad Amin Abbasi4, 1Department of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran, 2,4Department of Computer Science, Iran University of Science and Technology, Tehran, Iran, 3Department of Computer Engineering, Sharif University of Technology, Tehran, Iran

ABSTRACT

This study addresses the unique challenges of implementing Retrieval-Augmented Generation (RAG) systems in low-resource languages, focusing on Persian's complex morphology and flexible syntax. By developing Persian-specific models—MatinaRoberta (a masked language model) and MatinaSRoberta (a fine-tuned Sentence-BERT)—and establishing a robust benchmarking framework, the research aims to optimize retrieval and generation accuracy. These models were trained on a diverse corpus of 73.11 billion Persian tokens and evaluated using three datasets: general knowledge (PQuad), scientific-specialized texts, and organizational reports. The methodology involved extensive pretraining, fine-tuning with tailored loss functions, and systematic evaluations using both traditional metrics and the Retrieval-Augmented Generation Assessment (RAGAS) framework. Key findings indicate that MatinaSRoberta outperformed existing embeddings, achieving superior retrieval accuracy and contextual relevance across datasets. Additionally, temperature tuning, chunk size adjustments, and document summary indexing were explored to refine RAG configurations. Larger models like LLaMA-3.1 (70B) consistently demonstrated the highest generation accuracy, while smaller models faced challenges with domain-specific and formal contexts. The results highlight the potential of tailored embeddings and retrieval-generation configurations for advancing RAG systems in Persian. Implications extend to enhancing NLP applications such as search engines and legal document analysis in low-resource languages. Future research will explore broader applications and further optimization strategies for underrepresented linguistic contexts.

Keywords

Retrieval-Augmented Generation, Large Language Models, Benchmarking, Persian, Sentence Embeddings.


How Can Multilingual LMs Handle Multiple Languages?

Santhosh Kakarla, Satya Subrahmanya Gautama Shastry Bulusu Venkata, and Aishwarya Gaddam, Department of Computer Science, George Mason University, Fairfax, Virginia, USA

ABSTRACT

Multilingual language models (MLMs) have significantly improved due to the quick development of natural language processing (NLP) technologies. These models, such as BLOOM-1.7B, are trained on diverse multilingual datasets and hold the promise of bridging linguistic gaps across languages. However, the extent to which these models effectively capture and utilize linguistic knowledge—particularly for low-resource languages—remains an open research question. This project seeks to critically examine the capabilities of MLMs in handling multiple languages by addressing core challenges in multilingual understanding, semantic representation, and cross-lingual knowledge transfer. While multilingual language models show promise across diverse linguistic tasks, a notable performance divide exists. These models excel in languages with abundant resources, yet falter when handling less-represented languages. Furthermore, traditional evaluation methods focusing on complex downstream tasks often fail to provide insights into the specific syntactic and semantic features encoded within the models. This study addresses key limitations in multilingual language models through three primary objectives. First, it evaluates semantic similarity by analyzing whether embeddings of semantically similar words across multiple languages retain consistency, using cosine similarity as a metric. Second, it probes the internal representations of BLOOM-1.7B and Qwen2 through tasks like Named Entity Recognition (NER) and sentence similarity to understand their linguistic structures. Finally, it explores cross-lingual knowledge transfer by examining the models' ability to generalize linguistic knowledge from high-resource languages, such as English, to low-resource languages in tasks like sentiment analysis and text classification.
The results of this study are expected to provide valuable insights into the strengths and limitations of multilingual models, helping to inform strategies for improving their performance. This project aims to deepen our understanding of how MLMs process, represent, and transfer knowledge across languages by focusing on a mix of linguistic probing, performance metrics, and visualizations. Ultimately, this study will contribute to advancing language technologies that can effectively support both high- and low-resource languages, thereby promoting inclusivity in NLP applications.

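The semantic-similarity objective described above uses cosine similarity between word embeddings as its metric; a minimal self-contained version of that metric (the example vectors are invented for illustration, not taken from BLOOM-1.7B) is:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: 1.0 means the
    embeddings point in the same direction, 0.0 means they are orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d embeddings of translation-equivalent words; a consistent
# multilingual model should place them close together (similarity near 1.0).
en_word = [0.90, 0.10, 0.30]
es_word = [0.85, 0.15, 0.35]
```

Here `cosine_similarity(en_word, es_word)` is above 0.99, the kind of consistency the study checks for across languages.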


Prehistory of Artificial Intelligence: Homer and Hesiod in Cyberspace

Giuseppe Veronica, Independent Researcher

ABSTRACT

The Greeks and Romans did not have computers. However, some of them reasoned as if computers already existed. A grammarian of the 3rd century, Nonius Marcellus, built himself a database from which to mechanically draw material for his research. And it was perhaps Alcidamas, a sophist of the 5th-4th century, who tried to make the dead speak, anticipating by many centuries the cyberpunk fantasies that Artificial Intelligence is now preparing to realise.

Keywords

Latin, Greek, Science Fiction, Artificial Intelligence, Database, Algorithm.


A Smart Car Model Education and Social Communication Platform in the Automotive Industry using Virtual Reality and Artificial Intelligence

David Lee1, Moddwyn Andaya2, 1Arcadia High School, 180 Campus Dr, Arcadia, CA 91006, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

The program’s key technologies include an AI component that helps the user create descriptions for their models, a walking system that lets the user navigate the map, and a model importer that loads the user’s models so that they can view them. I addressed design challenges by adding many unique and specific words to the AI so that it produces distinct descriptions for the user [2]. The walking system was made smooth so that the user has the most realistic experience possible in the program. The model importer places each model in a set location so that it is always viewable by the user there. Design challenges were also addressed by looking online for inspiration for features to add, as well as for different models that could make the app more realistic.

Keywords

AI-generated descriptions, 3D model exploration, Realistic navigation, Model import system.


Climate Change Pulse: AI-Driven Social Media Sentiment Analysis for Disaster Impact

Alan Zheng1, Carlos Gonzalez2, 1West-Windsor Plainsboro High School North, 90 Grovers Mill Rd, Plainsboro Township, NJ 08536, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Climate change is an urgent global issue, with natural disasters becoming more severe and frequent due to human activities [1]. Understanding public sentiment around these events can inform climate awareness and policy. We developed ClimatePulse, a web-based tool that visualizes natural disasters alongside Twitter data to analyze how proximity and time influence climate-related sentiments [2]. Using the Climate Change Twitter Dataset, we examined over 15 million tweets, mapping them with disaster data through an interactive UI. Challenges included missing geospatial data and sentiment classification limitations, addressed by refining data filters and leveraging embedded tweets. Our experiments tested how distance and time around disasters affect sentiment, revealing that proximity intensifies negative emotions, and climate change deniers exhibit surprisingly strong negative sentiments. Compared to prior methodologies focused on data collection or basic sentiment analysis, our approach emphasizes user interactivity and behavioral analysis. ClimatePulse offers a dynamic way to understand climate discourse, bridging data insights with public engagement.

Keywords

Natural Language Processing, Machine Learning, Sentiment, Disasters.


An Adaptive Chess-based Learning Platform for Children with Accessibility Features using Generative AI and Machine Learning

Qixin Lin1, Owen Miller2, 1Central High School, 1700 W Olney Ave, Philadelphia, PA 19141, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Chess is a widely played game, but accessibility barriers can prevent many people from fully engaging with it. This project aims to study and create a more accessible chess application that integrates AI features to improve usability. The program features three main systems: a speech-to-command system that allows players to make moves using voice input, a text-to-speech system that gives audio feedback on board states, and a move-validation system that makes the AI output more accurate [1]. To test the system, we ran an experiment in which users provided voice inputs and we compared them with what the AI transcribed, looking for the most common mistakes in its output. This showed that most simple commands were accurately processed, but more complex phrases and words that can be spelled in multiple ways gave it more trouble [9]. We used those tests to find cases of slight errors and adjusted for them to improve the speech commands' accuracy. This project demonstrates how AI can enhance accessibility features to promote more inclusive gameplay.

Keywords

Accessible Chess, AI-Powered Usability, Speech-to-Command System, Inclusive Gameplay.


AI-Driven Real-Time Threat Response and Alert System for Women and Vulnerable Groups using KNN, Voice Recognition, and IoT

Irene Lu1, Richard Guo2, 1Arnold O. Beckman High School, 3588 Bryan Ave, Irvine, CA 92602, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This paper addresses the pressing issue of female safety through the design of a wearable device capable of detecting abnormal movements and recognizing voice commands [1]. Our approach integrates KNN-based motion detection and real-time speech recognition, offering a more reliable and accurate system for emergency detection [2]. Unlike previous methods that rely on computationally intensive models, such as ANNs and SVMs, our solution is optimized for low-power devices, ensuring real-time response without compromising efficiency [3]. The system was tested on various activities and in noisy environments, demonstrating its ability to detect emergency situations reliably. The results highlighted the importance of addressing edge cases, such as slower movements in elderly users and fast actions in athletes, which we plan to refine in future iterations. The combination of motion detection and voice recognition provides a flexible and dynamic safety mechanism, enhancing personal security [4]. Our proposed solution offers a significant step toward affordable and accessible safety technology, with the potential to empower women globally by improving their security in everyday situations.

Keywords

KNN Classifier, Voice Recognition, Datasets, Real-Time.
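The KNN-based motion detection described above is not specified in detail in the abstract; a minimal sketch of k-nearest-neighbour classification over motion-feature vectors (the feature values and labels below are invented for illustration) could read:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a motion-feature vector by majority vote among its k nearest
    training samples under Euclidean distance.

    train: list of (feature_vector, label) pairs.
    """
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Invented two-feature samples (e.g. acceleration magnitude, jerk):
samples = [
    ((0.10, 0.20), "normal"), ((0.20, 0.10), "normal"),
    ((0.15, 0.25), "normal"), ((0.90, 0.80), "emergency"),
    ((1.00, 0.90), "emergency"),
]
```

With these toy samples, `knn_classify(samples, (0.95, 0.85))` lands among the two high-magnitude points and is labelled an emergency; KNN's simplicity is also what makes it attractive for the low-power devices the abstract targets.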


A Gamified Approach to Memory Retention: Enhancing Cognitive Engagement for Alzheimer's Prevention

Sherry Huang1, Moddwyn Andaya2, 1The Bishop’s School, 7607 La Jolla Boulevard, La Jolla, CA 92037, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

As the older population grows, so do cases of Alzheimer's, a form of dementia that affects memory and cognitive skills [10]. Alzheimer's is a leading cause of death in the U.S. and the only one without a cure. Currently, 6.7 million people in the U.S. are living with it, and the number is expected to double by 2050. To help, my app offers memory-boosting games and quizzes designed to keep the brain active. While these games – which are also featured on platforms like HippoCamera, Lumosity, and Game Show – aren't a replacement for treatment, they are a valuable supplement. The app features a home page with articles on memory health, a spotlight game, and a memory log where users can add and categorize memories [11]. I tested the AI generating personalized quiz questions and found that 18 out of 20 met key criteria—clarity, no answer hints, and relevance to user input. Moving forward, I plan to add more quizzes, a gamified memory challenge, and a progress report to track user performance. Overall, this app aims to keep memory sharp.

Keywords

Memory Training App, Alzheimer's Prevention, Cognitive Engagement, Gamified Brain Exercise.


Numerical Simulation of Universal and Adiabatic Quantum Computation by Time Crystal: Proposal of Quantum Time Crystal Computing

Hikaru Wakaura and Andriyan B. Suksmono, QuantScape Inc., 4-11-18, Manshon-Shimizudai, Meguro, Tokyo, 153-0064, Japan

ABSTRACT

The time crystal is a well-known phenomenon observed in quantum Floquet systems: the states of the system propagate spontaneously while preserving time-reversal symmetry. It is a form of many-body localisation that prolongs the lifetime of quantum states. However, it is destroyed by operations from the external environment, such as gate operations. We therefore propose a method to control the time crystal by modifying the Hamiltonian and driving noise while conserving time-reversal symmetry. Using this method, we demonstrated simple gate operations, Grover's algorithm, and the Quantum Fourier Transform on a time crystal. We also demonstrated adiabatic Grover's and Shor's algorithms.

Keywords

Quantum Computing, Quantum Algorithm, Time Crystal.


Denial-of-Service (DoS) Attack Detection using Deep Learning and Machine Learning Techniques: A Comparative Approach

Bright G. Akwaronwu1 and Innocent U. Akwaronwu2, 1Babcock University, Nigeria, 2The University of Alabama in Huntsville, USA

ABSTRACT

Denial-of-Service (DoS) attacks remain a critical challenge in cybersecurity, disrupting the availability of network services and causing significant operational and economic losses. This research investigates the performance of machine learning (ML) and deep learning (DL) models in detecting DoS attacks, focusing on a comparative analysis to identify the most effective approaches. Ensemble Learning models, such as Random Forest (RF) and Extreme Gradient Boosting (XGB), demonstrated exceptional accuracy and reliability, with RF achieving near-perfect performance across metrics like accuracy (99%), precision (99%), and recall (99%). Deep Learning models, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), excelled in capturing complex patterns, with CNN achieving an accuracy of 98% and a perfect AUC score of 1.00. Feature selection using Recursive Feature Elimination (RFE) and data balancing techniques ensured robust model training and evaluation, minimizing overfitting and enhancing generalization. The results highlight RF and CNN as the best-performing models, with RF offering interpretability and computational efficiency, while CNN excels in handling unstructured and complex datasets. This study underscores the need for context-driven model selection and suggests exploring hybrid approaches that integrate the strengths of ML and DL for improved DoS attack detection. Future work should aim to enhance scalability and adaptability for real-world cybersecurity applications.

Keywords

Denial-of-Service (DoS) Attacks, Machine Learning, Deep Learning, Random Forest, Convolutional Neural Network, Feature Selection, Cybersecurity, Model Performance Analysis.
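Recursive Feature Elimination, mentioned above, iteratively drops the least important feature until a target count remains. A dependency-free sketch of the idea (using absolute Pearson correlation as a stand-in importance score; real pipelines rank features by model weights, e.g. scikit-learn's `RFE` with a Random Forest) is:

```python
def rfe(X, y, importance_fn, n_keep):
    """Recursive Feature Elimination: repeatedly remove the feature whose
    column scores lowest under importance_fn until n_keep indices remain.

    X: list of samples (each a list of feature values); y: labels.
    Returns the indices of the surviving features, in original order.
    """
    kept = list(range(len(X[0])))
    while len(kept) > n_keep:
        scores = {j: importance_fn([row[j] for row in X], y) for j in kept}
        kept.remove(min(kept, key=scores.get))
    return kept

def abs_corr(col, y):
    """Stand-in importance score: |Pearson correlation| with the label."""
    n = len(col)
    mx, my = sum(col) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
    sx = sum((a - mx) ** 2 for a in col) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0
```

On a toy dataset where one feature is constant (and thus uninformative), asking `rfe` to keep two features discards exactly that column, which is the pruning behaviour the study relies on to reduce overfitting.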


Computing in the Framework of Quantum Set Theory

Miklós Banai, Retired Theoretical Physicist, Budapest, Hungary

ABSTRACT

In this report, results on computing in terms of Gaisi Takeuti's quantum set theory are summarised.


Causal Inference in Financial Markets: A Generative AI-powered Web Application for Analyzing Macroeconomic Indicators and Stock Market Data in the US

Zixuan Zhou1, Carlos Gonzalez2, 1Brandeis University, 415 South St, Waltham, MA 02453, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

We aim to address the challenge of understanding stock market behavior during economic uncertainty by integrating S&P 500 stock data with macroeconomic indicators and leveraging Generative AI [6]. Traditional approaches often focus on technical or fundamental analysis, but few incorporate real-time data or AI-driven insights. Our solution combines Python-based data analysis, OpenAI’s API, and interactive visualizations to create a user-friendly platform for exploring stock trends and generating financial models [7]. Key challenges included ensuring AI accuracy and balancing functionality with simplicity, which we addressed through feedback mechanisms and modular design. The platform allows users to input queries (e.g., “Tesla stock performance”) and receive real-time insights, including text- based responses and visualizations. Preliminary results highlight the platform’s potential to democratize access to financial knowledge and improve decision-making. By combining modern AI technologies with traditional financial analysis, our project offers a versatile tool for investors, researchers, and policymakers, making it a valuable resource for navigating complex financial markets [8].


Multimodal Large Language Models for Automated Diagnosis and Clinical Decision Support

Kailash Thiyagarajan, Independent Researcher, Austin, TX, USA

ABSTRACT

Healthcare decision-making relies on diverse data sources, including electronic health records (EHRs), medical imaging, and textual clinical notes. While traditional AI models have demonstrated success in specific tasks such as radiology analysis or clinical text processing, they often lack the ability to integrate multimodal data effectively. This research introduces a Multimodal Large Language Model (M-LLM) that leverages transformer-based architectures to fuse text, images, and structured patient data for enhanced diagnosis and decision support. The proposed model integrates Vision Transformers (ViTs) for medical imaging, pretrained biomedical language models for textual analysis, and a multimodal fusion mechanism to enable holistic medical reasoning. The study utilizes MIMIC-IV (EHR), CheXpert (chest X-rays), and MedQA (medical question answering) datasets to evaluate performance. Results demonstrate that M-LLM outperforms traditional single-modality models, achieving improved accuracy, explainability, and robustness in clinical settings. The findings suggest that multimodal AI approaches can significantly improve healthcare diagnostics, offering real-time support for physicians while addressing data gaps in current AI-driven systems. Future directions include exploring federated learning for privacy-preserving AI, multimodal self-supervised learning, and deployment in clinical workflows.

Keywords

Multimodal Learning, Large Language Models, Clinical Decision Support, Medical Imaging, Vision-Language Models, Healthcare AI, Transformer Models, Biomedical NLP, Explainability, Federated Learning.


Empty Symbols in Pin Code or Password - Attempt for Higher Security

Qamil Kllogjeri1 and Adrian Kllogjeri2, 1IT Support Specialist, Global Savings Group, Munich, Bavaria, Germany, 2CStat CSci, Risk Manager, Hyundai Capital UK, London, United Kingdom

ABSTRACT

This paper is part of an investigation into the use of keystroke-dynamics biometrics as added security for passwords and PINs (Personal Identification Numbers), banking Automatic Teller Machines (ATMs), and other systems [6]. Our proposal concerns the use of empty symbols and how to combine the PIN digits with empty symbols while entering the PIN code or password. So far we have no empirical evidence for the efficiency of empty symbols; this is a proposal based on theoretical statements, and experiments are necessary to prove whether empty symbols provide a higher level of security. On theoretical grounds, our conviction is that the use of empty symbols confuses attackers and increases security to a satisfactory level, or several times higher.

Keywords

keystroke dynamics, PIN or password security, empty symbols, Wilcoxon test, similar metrics.
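One way to read the proposal above: designated decoy ("empty") keys are interleaved with the real PIN digits and discarded at verification time. A toy sketch (the decoy symbols `*` and `#` are our own choice for illustration, not the authors') might be:

```python
def verify_entry(entered, pin, empty_symbols=("*", "#")):
    """Accept an entry if, after discarding the decoy (empty) symbols,
    the remaining keystrokes exactly match the real PIN.

    An observer then sees a longer, varying keystroke sequence on each
    entry instead of the bare PIN digits.
    """
    stripped = "".join(ch for ch in entered if ch not in empty_symbols)
    return stripped == pin
```

For example, `verify_entry("1*2#3*4", "1234")` accepts, while a transposed entry such as "1243" is rejected.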


Knowledge Discovery for Intelligent High-Rise Structure Type Optimization Based on Rough Set Distinguished Matrix Approximation Algorithm

Yuanzheng Zhang, School of Computer Science, University of Southern California, Los Angeles 90007, USA

ABSTRACT

Knowledge acquisition is the key element determining the quality of intelligent structure-type optimization; rough set theory is a new knowledge-discovery method, and the attribute approximation algorithm is its core. First, the process of the attribute approximation algorithm based on the differentiation matrix is given for the application scenario of intelligent type optimization of high-rise structures. Second, based on a case set of high-rise structural engineering, the conditional-attribute approximation algorithm is established by determining the differentiation matrix, the relative D kernel, and the D approximation of the case set's conditional attributes. Finally, the established approximation algorithm is used to obtain decision-rule knowledge for intelligent structure-type optimization. Practice shows that this method is simple, efficient, and theoretically complete; it overcomes the shortcomings of traditional knowledge-discovery methods, which cannot perform attribute simplification and are inefficient, and it provides a new method for knowledge acquisition in intelligent structure-type optimization.

Keywords

Rough Sets, Distinction Matrix, Approximation Algorithm, High-rise Structure, Intelligent Optimization.


Compilation, Optimization, Error Mitigation, and Machine Learning in Quantum Algorithms

Paul Wang, Jianzhou Mao, and Eric Sakk, Department of Computer Science, Morgan State University, Baltimore, Maryland 21251, USA

ABSTRACT

This paper discusses the compilation, optimization, and error mitigation of quantum algorithms, essential steps in executing real-world quantum algorithms. Quantum algorithms running on a hybrid platform with a QPU and CPU/GPU take advantage of existing high-performance computing power together with quantum-enabled exponential speedups. The proposed approximate quantum Fourier transform (AQFT) for quantum algorithm optimization improves circuit execution on top of the exponential speedup that the quantum Fourier transform already provides.

Keywords

Transpilation, Optimization, Error Mitigation, Quantum Machine Learning, Quantum Algorithms.


A Predictive System to Monitor Lithium Carbonate Levels using Machine Learning and Physiological Data

Fuyi Xie1, Carlos Gonzalez2, 1University of California Irvine, Irvine, CA 92697, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Access to high-quality medical data is critical for research but is often hindered by privacy concerns and logistical challenges. GenDataset addresses this problem by developing a generative AI tool that produces synthetic medical data while preserving privacy and statistical integrity [1]. The tool integrates Kaggle for dataset retrieval, Gretel for synthetic data generation, and Firebase for secure storage, all wrapped in a user-friendly web interface. Key challenges included ensuring data utility, scalability, and ease of use, which were addressed through advanced machine learning models, API integrations, and modular design. Experiments demonstrated the tool’s ability to generate realistic datasets tailored to user specifications, such as demographics and region. GenDataset improves existing methods by balancing privacy, utility, and accessibility, making it a valuable solution for researchers. Its ability to streamline data collection and ensure compliance with privacy regulations positions it as a transformative tool for advancing medical research and data-driven healthcare innovations [2].

Keywords

AI, Firebase, Medical, Machine Learning.


Enhancing Communication for Neurodiverse Individuals: Usability and Accessibility Evaluation of the Cardibly Web Platform

Ryder Wei1, Tyler Boulom2, 1Cate School, 1960 Cate Mesa Rd, Carpinteria, CA 93013, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This study evaluates Cardibly, a web-based platform designed to improve communication for autistic children, particularly those with limited verbal expression. Traditional communication tools like flashcards often fail to meet the accessibility, customization, and usability needs of neurodiverse individuals, leading to frustration and social isolation [1]. Cardibly addresses these challenges by offering a simple, customizable, and accessible solution that allows caregivers and educators to create personalized communication cards, enabling more effective interactions. Two experiments were conducted to assess the platform’s usability and accessibility. The first experiment focused on customization, revealing an average task completion time of 4.5 minutes, with some errors indicating the need for interface improvements. The second experiment tested accessibility, showing that while 90% of tasks were successfully completed, challenges with dynamic content highlighted the need for better ARIA labeling and focus management [2]. Despite these issues, Cardibly’s potential to enhance communication and independence among neurodiverse individuals was evident, with future improvements including AI-powered conversation suggestions and a sentence creation feature.

Keywords

Neurodiverse Communication, Customizable Flashcards, Accessibility in Web Design, Assistive Technologies, Usability Testing.


Achieving Secure Transaction Gateway with Low Latency using the Hybrid Cryptographic Securexpress Protocol

Lokesh Kumar S B1, Shashannk S1, and Dr. Jannath Nisha O S2, 1School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India, 2Assistant Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India

ABSTRACT

As digital transactions surge, payment gateways face growing challenges in ensuring security without compromising speed. Addressing these critical needs, this research article presents the SecureXpress Protocol (SXP), a hybrid cryptographic solution meticulously designed to strengthen payment gateway security while minimizing transaction latency. SXP integrates Elliptic Curve Cryptography (ECC) for efficient key exchange, the Advanced Encryption Standard (AES) for uncompromised data confidentiality, the Elliptic Curve Digital Signature Algorithm (ECDSA) for non-repudiation, and Secure Hash Algorithm (SHA-3) for assured data integrity. Through extensive evaluation, SXP has proven its capability to deliver rapid transaction processing with minimal latency, setting a new benchmark for real-time payment systems. In addition to speed, the protocol offers robust protection against diverse security threats, ensuring forward secrecy and resilience to sophisticated, evolving attacks. The seamless balance of speed, security, and efficiency inherent in SXP makes it a compelling choice for modern payment infrastructures, enhancing trustworthiness across digital financial transactions. This work showcases SXP’s potential as a new standard for secure and efficient digital payment solutions.
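The envelope structure a hybrid protocol like SXP implies (encrypt the payload, then attach an integrity/authenticity tag) can be sketched with the standard library alone. This is a minimal sketch under loud assumptions: the ECC key exchange, AES cipher, and ECDSA signature named in the abstract are replaced by stdlib stand-ins (a SHA3-derived keystream and an HMAC-SHA3 tag), so it illustrates only the message flow, not the paper's cryptography, and must not be used in production.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA3-based counter keystream: an illustrative stand-in for AES.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha3_256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal_transaction(payload: bytes, enc_key: bytes, mac_key: bytes) -> dict:
    """Encrypt-then-tag envelope: confidentiality plus integrity."""
    nonce = os.urandom(16)
    ciphertext = bytes(p ^ s for p, s in zip(payload, _keystream(enc_key, nonce, len(payload))))
    # HMAC with SHA-3 over nonce + ciphertext (stand-in for an ECDSA signature)
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha3_256).hexdigest()
    return {"nonce": nonce.hex(), "ct": ciphertext.hex(), "tag": tag}

def open_transaction(env: dict, enc_key: bytes, mac_key: bytes) -> bytes:
    """Verify the tag before decrypting; reject any tampered envelope."""
    nonce, ct = bytes.fromhex(env["nonce"]), bytes.fromhex(env["ct"])
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha3_256).hexdigest()
    if not hmac.compare_digest(expect, env["tag"]):
        raise ValueError("integrity check failed")
    return bytes(c ^ s for c, s in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

The verify-before-decrypt ordering is the point of the sketch: a gateway can drop forged transactions after one cheap hash, keeping latency low under attack.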

Keywords

SXP, ECC, AES, SHA-3, ECDSA, Payment Gateway.


Blind Hyperspectral Image Restoration for Unmanned Aerial Vehicle Applications

Victor Sineglazov1 and Kyrylo Lesohorskyi2, 1Department of Aeronavigation, Electronics and Telecommunication, Kyiv Aviation Institute, Kyiv, Ukraine, 2Department of Artificial Intelligence, IASA, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, Ukraine

ABSTRACT

This work is devoted to the modification of existing blind image restoration algorithms and methodologies for noise and blur elimination in videos and images captured by unmanned aerial vehicles. It improves on existing algorithms and methodologies to address their challenges and limitations when applied to high-dimensional hyperspectral data by applying channel compression based on 3D convolutions as a dimensionality reduction method. The methods and algorithms described in this paper can be applied in near-real-time and batch-processing scenarios. A detailed analysis of noise and blur types and their respective sources is provided. An overview of existing methods is given, and their limitations when applied to hyperspectral data are analyzed. A two-stage image restoration approach for hyperspectral data based on recurrent neural networks is introduced. The proposed algorithms address the key limitations of hyperspectral image restoration, providing quality and performance comparable to non-hyperspectral image restoration.
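The channel-compression step the abstract describes, a 3D convolution that slides along the spectral axis with a stride to shrink the number of bands, can be illustrated with plain NumPy. This is a sketch of the operation only; the learned kernels, layer shapes, and function name are assumptions, not the paper's network.

```python
import numpy as np

def compress_bands(cube, kernel, stride):
    """Compress the spectral dimension of a hyperspectral cube with a 3-D
    convolution (illustrative stand-in for a learned 3-D conv layer).

    cube:   (bands, H, W) array
    kernel: (kb, kh, kw) array, slid with `stride` along the band axis
    returns an array of shape ((bands-kb)//stride + 1, H-kh+1, W-kw+1)
    """
    kb, kh, kw = kernel.shape
    b, h, w = cube.shape
    out_b = (b - kb) // stride + 1
    out = np.empty((out_b, h - kh + 1, w - kw + 1))
    for ob in range(out_b):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                patch = cube[ob * stride:ob * stride + kb, i:i + kh, j:j + kw]
                out[ob, i, j] = float((patch * kernel).sum())  # dot with kernel
    return out
```

With a stride equal to the spectral kernel depth, a 32-band cube collapses to 8 compressed channels, which is what makes standard (non-hyperspectral) restoration backbones applicable downstream.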

Keywords

Hyperspectral Imagery, Image Restoration, Recurrent Neural Networks, Unmanned Aerial Vehicles.


An Intelligent Mobile Application to Test Sobriety of the User using Machine Learning

Christopher Feng1, Soroush Mirzaee2, 1Hopkins School, 986 Forest Rd, New Haven, CT 06515, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This project presents a sobriety detection system aimed at preventing drunk driving through a combination of image recognition, alcohol sensing, and speech evaluation tests. It uses a Raspberry Pi device equipped with an alcohol sensor and a camera to assess a user's sobriety by analyzing facial expressions and blood alcohol content (BAC) [9][10]. Additionally, a vocal test is conducted on the user's smartphone, with results processed and stored in a cloud database for further analysis. The system offers an affordable, reliable, and user-friendly alternative to traditional methods such as breathalyzers by detecting both alcohol and drug impairment. Through a series of experiments, the system demonstrated high accuracy, achieving a 97% success rate when all tests are combined, highlighting its potential to reduce driving-under-the-influence incidents.

Keywords

Drunk Driving, Sobriety, Raspberry Pi, Flutter.


Modeling and Analysis Methods for Early Detection of Leakage Points in Gas Transmission Systems

Ilgar Aliyev1 and Fatma Gurbanova2, 1Head of Department, Azerbaijan Architecture and Construction University, Baku, Azerbaijan, 2Master's student, Azerbaijan Architecture and Construction University, Baku, Azerbaijan

ABSTRACT

Early detection of leaks in gas transmission systems is of great importance for ensuring uninterrupted gas supply and optimizing operational costs. For this purpose, the issue of identifying leak locations based on the analysis of unsteady gas flow parameters in gas pipelines has been studied. Within the scope of the research, a model has been proposed that takes into account pressure variations at the inlet and outlet points of the gas pipeline depending on the leak position. The primary objective of the modeling is to ensure the efficient operation of the gas pipeline and minimize the impact of potential accidents. In this regard, a new approach has been developed to determine the minimal time (t = t1), which enables timely leak detection and the implementation of prompt measures. A new model has been introduced for the real-time detection of gas leaks and for its integration into control systems.
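The notion of a minimal detection time t1 can be illustrated with a generic change-point detector on the outlet-pressure series. To be clear about assumptions: the one-sided CUSUM below is a standard textbook stand-in, not the paper's unsteady-flow model, and the baseline, threshold, and drift parameters are hypothetical.

```python
def detect_leak_time(pressures, baseline, threshold, drift=0.0):
    """Return the index t1 of the first alarm on a sustained pressure drop.

    One-sided CUSUM: accumulate the shortfall of each pressure sample
    below the no-leak baseline (minus an allowed drift) and alarm once
    the cumulative sum exceeds the threshold. Returns None if no leak
    signature is seen.
    """
    s = 0.0
    for t, p in enumerate(pressures):
        s = max(0.0, s + (baseline - p) - drift)  # reset on recovery
        if s > threshold:
            return t
    return None
```

The threshold trades detection delay against false alarms from normal pressure fluctuations, which mirrors the minimal-time trade-off the abstract describes.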

Keywords

Leakage, Modeling, Optimal, Analytical Method, Operational.


A Smart Public Speaking Training and Feedback Analysis System using Virtual Reality and Machine Learning

Logan Z. Chang1, Anthony Ovando2, 1Portola High School, 1001 Cadence, Irvine, CA 92618, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

I founded my project on addressing the fear of public speaking, a problem present for many students and even adults. My program uses VR technology so users can record themselves giving a presentation in a classroom [1]. The program transcribes and grades the presentation with the use of AI. The first important system is the Whisper AI feature, which converts the user's speech into text [2]. Next, the chatbot grading feature analyzes the transcript to output a letter grade and additional feedback. Finally, the overall integration of VR allows users to interact with the environment, boosting the app's realism. This feature tackled a major design challenge in making the application more realistic and enjoyable. In my experiment, ten presentations of varying quality were presented and graded. Most of the output fell within one letter grade of the prediction, signifying accuracy. To conclude, my solution assesses performance, allowing people to communicate effectively and build confidence.

Keywords

Public Speaking Anxiety, AI Speech Transcription, VR-Based Presentation Training, Automated Performance Assessment.


Beyond the Wind: Rethinking the Saffir-Simpson Hurricane Wind Scale

Brandon L. Toliver, The George Washington University, Washington, District of Columbia, USA

ABSTRACT

The Saffir-Simpson Hurricane Wind Scale (SSHWS) has been a cornerstone of hurricane categorization since its development in the 1970s. However, its wind-centric focus excludes critical hazards such as storm surge, rainfall, and storm size—factors often responsible for most of the damage and fatalities. This paper identifies these gaps, evaluates recent catastrophic hurricanes, and introduces the Composite Hurricane Impact Scale (CHIS), which integrates storm surge, rainfall, and wind factors into a unified framework. The CHIS aims to enhance emergency response, public awareness, and disaster preparedness.
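A composite index of the kind CHIS proposes, blending wind, surge, and rainfall into a single 1-5 category, can be sketched as follows. The hazard ceilings and weights below are illustrative assumptions chosen for the sketch, not values proposed by the paper.

```python
def chis_category(wind_mph, surge_ft, rain_in, weights=(0.4, 0.35, 0.25)):
    """Toy composite hurricane index: normalize each hazard to [0, 1]
    against an illustrative severe-event ceiling, blend with weights,
    and map the blended score to a 1-5 category.
    """
    wind = min(wind_mph / 180.0, 1.0)   # ~Category 5 sustained winds
    surge = min(surge_ft / 20.0, 1.0)   # ~Katrina-scale surge
    rain = min(rain_in / 50.0, 1.0)     # ~Harvey-scale rainfall totals
    score = weights[0] * wind + weights[1] * surge + weights[2] * rain
    return max(1, min(5, 1 + int(score * 5)))  # clamp to categories 1-5
```

The point of the structure is that a modest-wind storm with extreme surge or rainfall can still rate a high category, which the wind-only SSHWS cannot express.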

Keywords

Hurricane categorization, storm surge, rainfall intensity, Saffir-Simpson, disaster preparedness.


An Immersive Training System for Environmental Chemistry Lab Safety using Virtual Reality and Artificial Intelligence

Yanzuo Zhu1, Moddwyn Andaya2, 1Margaret’s Episcopal School, 31641 La Novia Ave, San Juan Capistrano, CA 92675, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

The app aims to prevent lab-related accidents by allowing students to explore a virtual lab environment before entering a real one [1]. It helps users become familiar with safety procedures and complex equipment through interactive, step-by-step experiments. Before starting an experiment, users must equip safety gear like gloves or goggles, reinforcing essential precautions. The use of virtual reality provides a hands-on experience without real-world risks, helping students feel more prepared and confident. Compared to traditional learning methods like books or videos, the game-based approach engages younger students more effectively, improving knowledge retention. User feedback suggests strong potential, with 75% of students reporting increased confidence in handling lab equipment and 85% recommending the app. To enhance effectiveness, adding more realistic scenarios and task-specific feedback could improve engagement and learning outcomes. Overall, the app demonstrates a positive impact on lab safety awareness and training, with room for further refinement to maximize its educational value [2].

Keywords

Virtual Lab Training, Lab Safety Education, Gamified Learning, Interactive Experiment Simulation.