COMPUTER SCIENCE
Developing a mobile app for rural delivery is one of the most important challenges of our time. Rural residents often face difficulties in adopting new digital technologies due to limited IT infrastructure, low population density, and poor internet quality. This article examines the key aspects of implementing such a project, with a particular focus on creating a user-friendly, intuitive app tailored to the needs of rural users. The study highlights both challenges, including infrastructural limitations and digital literacy gaps, and opportunities, such as promoting digital inclusion and meeting the needs of rural residents. It identifies key factors for successful implementation, such as simplifying navigation, ensuring accessibility, and taking into account the socio-economic characteristics of rural areas. By focusing on an intuitive interface and a clear structure, the project aims to bridge the digital divide, allowing rural users to enjoy modern delivery services without the barriers they face every day.
The use of immersive technologies such as virtual reality (VR) and augmented reality (AR) opens vast opportunities for education by enhancing learning experiences and connecting students and teachers through digital space. This concept is called the Metauniversity: a digitized, interactive copy of the learning environment, in other words, a digital twin of a university. The Metauniversity reveals great potential, especially in combination with artificial intelligence technologies and tools. This article explores the development of digital universities and the transformation of education, and proposes our own concept of the Metauniversity as an innovative educational environment for learning without borders. It is shown that the concept of the Metauniversity is central to this transformation, opening the way to a new era of engaging, accessible and personalized education. As technical challenges are overcome and the potential of these technologies is unleashed, the prospect of creating a global educational ecosystem that is fair, engaging and effective becomes increasingly achievable.
In the oil and gas industry, the collection and purification of sour water during crude gas distillation is a crucial technological process. Optimal control of this process can increase production efficiency, reduce environmental emissions of acids and carbon dioxide, and enable more rational use of natural resources amid the growing demands of the oil and gas industry. This study focuses on the design of a digital twin using Honeywell Unisim Design software. Special attention is given to the development and implementation of a split-range control system within a dynamic model. The developed model has been analyzed using historical data to verify its accuracy. A control strategy has been designed for the C300 industrial controller, along with an operator HMI, to integrate the digital twin with the Experion PKS distributed control system. The digital twin unlocks new opportunities for developing and testing control schemes, optimizing parameters, and fine-tuning safety systems. According to customer surveys conducted by Honeywell, the implementation of digital twins can reduce design costs by more than 10%, shorten development time by 10%, and increase plant productivity by 3–8%.
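To make the split-range idea concrete, the sketch below maps a single controller output onto two valves over complementary sub-ranges. This is a minimal pure-Python illustration: the 50% split point and the valve assignment are assumptions for the example, not the configuration of the Unisim Design model described above.

```python
def split_range(controller_output: float) -> tuple[float, float]:
    """Map one PID output (0-100%) onto two valve openings (0-100% each).

    Illustrative split: valve A travels over the 0-50% portion of the
    signal, valve B over 50-100%.
    """
    u = max(0.0, min(100.0, controller_output))  # clamp to the signal range
    valve_a = min(u, 50.0) / 50.0 * 100.0        # fully open at u = 50
    valve_b = max(u - 50.0, 0.0) / 50.0 * 100.0  # begins opening at u = 50
    return valve_a, valve_b
```

In a real split-range loop the two halves typically drive opposing actuators (e.g., heating versus cooling, or two reagent valves), so only one moves through most of the signal range.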
This article is devoted to the problem of using neural network algorithms for automated analysis of student reviews. In modern multidisciplinary educational institutions and on online learning platforms, student feedback has become an important indicator of the quality of the educational process and serves as a basis for further adjustments. Classical approaches, such as manual processing and descriptive statistics, cannot always reveal how deeply students' opinions can be understood and analyzed. In contrast to traditional text-processing methods, neural network approaches, including Recurrent Neural Networks (RNNs) and transformer models such as BERT, can handle larger volumes of textual information and apply more effective approaches to uncovering hidden patterns. The article considers approaches to processing and analyzing reviews, the stages of developing neural network algorithms, and their possible impact on education. The potential of more advanced neural network methods is discussed, including training on large amounts of data, contextual understanding, and learning from fewer examples. The study of the neural network approach also indicates that it is important to pay attention to ethics and explainability. The article concludes that the use of neural network algorithms helps optimize the management of educational courses and increase their demand among students, and it raises questions for further research on this topic.
Predicting thermal comfort in indoor environments is important for improving residents’ well-being, productivity, and energy efficiency. This study explores machine learning approaches, specifically Support Vector Machines (SVM) and Random Forest (RF), to improve thermal comfort prediction. Traditional methods rely on subjective assessments, whereas our approach leverages data-driven models trained on large thermal comfort datasets. The dataset underwent rigorous preprocessing, with 80% used for training and 20% for testing. The integration of the Internet of Things (IoT) further enhances predictive accuracy by enabling adaptive control in smart building systems. A comparative analysis of SVM and RF reveals that while both models effectively capture the complex interactions between environmental parameters and resident comfort, RF demonstrates greater stability and higher accuracy in most scenarios. The paper proposes potential strategies for integrating additional predictive features to further enhance model accuracy, demonstrating the advancement of machine learning in optimizing indoor comfort.
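The 80/20 train/test protocol mentioned above can be sketched as follows. This is a pure-Python stand-in for a library routine such as scikit-learn's `train_test_split`; the seed and shuffling policy are illustrative assumptions, since the paper's exact preprocessing pipeline is not given in the abstract.

```python
import random

def train_test_split(rows, test_frac=0.2, seed=42):
    """Shuffle a dataset and split it into train and test portions.

    With test_frac=0.2 this reproduces the 80/20 protocol described
    in the text; seed fixes the shuffle for reproducibility.
    """
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    n_test = int(len(rows) * test_frac)
    test_idx = set(idx[:n_test])
    train = [rows[i] for i in idx if i not in test_idx]
    test = [rows[i] for i in idx[:n_test]]
    return train, test
```

Fixing the seed matters when comparing models such as SVM and RF: both must see the identical split, or the comparison confounds model quality with sampling noise.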
This paper explores the challenges associated with cryptographic key management and highlights the importance of developing efficient protocols to ensure secure and trustworthy key exchange in cryptographic systems. It focuses on the Schnorr digital signature scheme, recognized for its features such as indivisibility, non-repudiation, and resistance to message replay attacks. The study introduces a modified version of the Schnorr scheme, incorporating a non-positional polynomial number system. It outlines the process of generating random numbers for keys and computing necessary values using selected polynomial bases. The implementation of a non-positional polynomial number system in the creation of non-traditional digital signature algorithms and key management mechanisms significantly improves both the reliability and the performance of cryptographic operations. Furthermore, the paper discusses the potential for adapting the proposed scheme to enhance resistance against quantum computing threats, contributing to the development of quantum-resilient cryptographic solutions.
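For reference, a toy implementation of the classical Schnorr scheme over a small prime-order subgroup is sketched below. The parameters (p = 2039, q = 1019, g = 4) and seeds are illustrative and far too small for real use; the paper's modification based on a non-positional polynomial number system is not reproduced here.

```python
import hashlib
import random

# Toy subgroup parameters: q = 1019 divides p - 1 = 2038, and g = 4 has order q.
P, Q, G = 2039, 1019, 4

def _h(r: int, msg: bytes) -> int:
    """Hash the commitment together with the message, reduced mod q."""
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

def keygen(seed=1):
    x = random.Random(seed).randrange(1, Q)   # private key
    return x, pow(G, x, P)                    # (private x, public y = g^x mod p)

def sign(x: int, msg: bytes, seed=2):
    k = random.Random(seed).randrange(1, Q)   # fresh per-signature nonce
    r = pow(G, k, P)                          # commitment
    e = _h(r, msg)                            # challenge
    s = (k + x * e) % Q                       # response
    return e, s

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    # g^s * y^(-e) = g^(k + x*e) * g^(-x*e) = g^k recovers the commitment r
    r_v = (pow(G, s, P) * pow(y, Q - e, P)) % P
    return _h(r_v, msg) == e
```

Note that the nonce k must never repeat across signatures: two signatures with the same k leak the private key, which is one reason key-management protocols of the kind the paper studies matter in practice.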
Modern industrial automation systems generate a large volume of production data during operation, the processing of which by modern artificial intelligence methods allows for timely diagnostics of the condition and prediction of wear of expensive equipment. The article develops an innovative multi-agent system based on neuroimmune-endocrine interaction for diagnostics of industrial equipment. The system consists of agents specializing in data reduction, based on an artificial neural network (ANN), an artificial endocrine algorithm (AEA) and an artificial immune system (AIS). The task of these agents is to reduce the size of the database without losing its information content. Predictive agents based on AIS and AEA have also been developed, which classify the condition of equipment based on the information obtained after data reduction and predict equipment wear. Experiments were carried out on real industrial data of the oil refinery TengizChevroil LLC. The modeling results showed the prospects of using this approach: the AUC (Area Under the ROC Curve) values obtained range from 0.86 to 0.90, the throughput of the multi-agent system is 1,000 tasks per second, the prediction time is 1 ms, and the fault tolerance is 100%.
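For context, the AUC metric reported above can be computed directly from the Mann-Whitney rank identity: it equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counted as half). A minimal sketch follows; the scores are invented, not the paper's data.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via pairwise score comparison:
    AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

This O(n·m) form is fine for small validation sets; large-scale evaluation would sort the scores once and use ranks instead.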
The development of credit scoring is one of the key areas of credit risk management in financial companies. However, a single approach to building scorecards is often inadequate, since loan products differ in risk and financing terms, and information on borrowers is frequently insufficient. The paper addresses the features of creating scorecards for consumer credit, refinancing, small and medium businesses, auto loans, mortgage loans, fintech and P2P lending. The present work can thus be considered a comparative analysis, by segment, of the most important factors influencing the probability of borrower default, together with a consideration of machine learning techniques and alternative data sources that can improve forecast accuracy. For each type of credit product, the analysis yields recommendations for choosing the optimal approach to building scorecards, thereby enhancing the accuracy of borrower creditworthiness projections and reducing default risk.
In this study, a physical model of indoor air temperature dynamics has been developed, considering heat transfer and convection. The system was modeled in COMSOL Multiphysics and tested in MATLAB, where the influence of external temperature, room area, number of radiator sections and air flow velocity was analyzed. The results showed a strong correlation between room temperature and external temperature (0.92), while weaker dependence was observed on the radiator temperature (0.2), the height (0.1) and the area of the room (0.11). The number of sections and the size of the radiator have the least impact on room temperature (0.07). Additionally, the initial room temperature shows no significant correlation with the final room temperature. The correlations observed in the simulations enabled us to develop a transfer function of the controlled object in MATLAB/Simulink. A nonlinear relay in the resulting model turns the actuator on and off to control the room temperature. The results of the study can be used to create a neural network that simulates the physical behavior of room temperature under different initial conditions.
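The relay (on/off) control loop described above can be sketched with a first-order room model and a hysteresis band. All coefficients below are illustrative assumptions, not the transfer-function parameters identified in the paper.

```python
def simulate_relay_control(t_out=-10.0, t_set=21.0, hyst=0.5,
                           k_loss=0.1, k_heat=5.0, t0=15.0,
                           dt=0.01, steps=20000):
    """Bang-bang (relay) temperature control with hysteresis.

    Illustrative room model: dT/dt = k_heat * u - k_loss * (T - t_out),
    where u is the relay output (1 = heater on, 0 = off). Integrated
    with explicit Euler steps of size dt.
    """
    T, u = t0, 0
    for _ in range(steps):
        if T < t_set - hyst:
            u = 1                      # relay switches the heater on
        elif T > t_set + hyst:
            u = 0                      # relay switches the heater off
        T += dt * (k_heat * u - k_loss * (T - t_out))
    return T
```

The hysteresis band (`hyst`) is what keeps the relay from chattering: the temperature cycles inside roughly [t_set - hyst, t_set + hyst] instead of the actuator switching every step.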
This paper describes a security framework that combines blockchain technology with quantum-enhanced anomaly detection. We propose using a blockchain to create an immutable record of security events, with smart contracts automatically responding to confirmed threats. A variational quantum circuit (VQC) forms the basis of our system's hybrid quantum-classical model. The VQC encodes classical data into quantum states, uses parameterized gates to model complicated dependencies, and then measures the result to produce a classification. We use a One-vs-Rest (OvR) method to detect network attacks such as Botnet, Brute Force, and Port Scan. Performance was evaluated in both ideal (noiseless) and simulated noisy quantum environments. The model was 93% accurate without noise and 92% accurate with noise, which demonstrates its robustness. We also found a major trade-off: the OvR method works well but carries a substantial computational cost. This indicates that subsequent efforts should concentrate on creating more efficient quantum multiclass classification frameworks.
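The One-vs-Rest strategy itself is classical and can be sketched independently of the quantum backend. In the sketch below, the per-class binary scorer is a simple centroid-contrast rule standing in for the VQC; the class names mirror the attack types mentioned above, but the data and scorer are invented for illustration.

```python
class OneVsRest:
    """One-vs-Rest multiclass wrapper: one binary scorer per class,
    predicting the class whose scorer is most confident."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.models_ = {}
        for c in self.classes_:
            pos = [x for x, lab in zip(X, y) if lab == c]
            neg = [x for x, lab in zip(X, y) if lab != c]   # "the rest"
            self.models_[c] = (self._centroid(pos), self._centroid(neg))
        return self

    @staticmethod
    def _centroid(pts):
        return [sum(col) / len(pts) for col in zip(*pts)]

    @staticmethod
    def _d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    def decision(self, x, c):
        # Positive when x is closer to class c than to the rest.
        pos_c, neg_c = self.models_[c]
        return self._d2(x, neg_c) - self._d2(x, pos_c)

    def predict(self, x):
        return max(self.classes_, key=lambda c: self.decision(x, c))
```

The cost trade-off noted in the text is visible here in miniature: OvR trains and evaluates one scorer per class, so an n-class problem multiplies the base model's cost by n.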
This research aims to use artificial intelligence to improve agricultural practice in Kazakhstan, focusing on tomato leaf disease detection and fertilizer optimization. Deep learning models, including GoogleNet (InceptionV3), VGG16, ResNet50, MobileNetV2, and a custom Convolutional Neural Network (CNN), were evaluated for disease detection. GoogleNet achieved the highest accuracy of 99.72%, demonstrating its capability to detect tomato leaf diseases. For fertilizer optimization, several machine learning models, namely Decision Trees, K-Nearest Neighbors, CNN, Gradient Boosting Decision Tree, and LogitBoost, were assessed using various numbers of PCA features. The CNN model using six PCA features achieved the best accuracy at 97.58%, showing how informative features can aid prediction. The results show that AI technologies can significantly increase the agricultural productivity and sustainability of Kazakhstan through precise disease detection and optimized resource use. Future studies should deploy the models in real-time agricultural systems and extend them to more crops and growing conditions.
This article presents a comparative analysis of modern neural network architectures, convolutional neural networks (CNNs) and transformers, for the automatic diagnosis of rice leaf diseases. In the experiments, DenseNet121, ResNet, Vision Transformer (ViT), and MaxViT models were trained and tested, followed by their evaluation in terms of accuracy and computational efficiency. The study was conducted on a large-scale dataset containing real images of healthy and diseased rice leaves, which makes the results highly relevant for agricultural science and practice. The experiments included hyperparameter optimization, application of data augmentation techniques, and the use of loss functions and regularization methods to improve the generalization ability of the models. The evaluation metrics comprised classification accuracy, F1-score, as well as computational efficiency indicators such as prediction time and resource consumption. The results showed that transformer-based models, particularly MaxViT, achieve accuracy of up to 94.10%. This is attributed to their ability to effectively capture both local and global image features through attention mechanisms and deep contextualization. At the same time, CNN architectures such as DenseNet121 and ResNet demonstrate high processing speed and robustness under limited computational resources.
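One of the evaluation metrics listed above, the F1-score, is the harmonic mean of precision and recall computed per class. A minimal implementation, as it would be applied to one disease class at a time, might look like:

```python
def f1_score(y_true, y_pred, positive):
    """Per-class F1: harmonic mean of precision and recall,
    treating `positive` as the class of interest."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Averaging these per-class values (macro-F1) is the usual way to summarize multiclass disease classifiers, since plain accuracy can hide poor performance on rare disease classes.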
Modern agriculture faces a number of serious challenges, including climate change, soil degradation, water scarcity, biological threats and the negative impact of anthropogenic factors. A special place among these challenges is occupied by field weediness, which requires accurate monitoring and timely response. This study is devoted to the development of a system for automatic recognition and mapping of weeds with high geospatial accuracy based on UAV data. The proposed approach includes the application of computer vision algorithms for weed detection, data augmentation techniques to improve recognition accuracy, and the author’s map splicing method to provide accurate geo-referencing of detected weeds. Experimental tests confirmed the effectiveness of the developed system in the tasks of automatic detection of weeds and creation of geo-referenced maps of their distribution. Implementation of this system will allow agricultural producers to carry out spot treatment of weedy areas, optimize the use of herbicides and increase the efficiency of weed control.
This study focuses on the development and implementation of a route optimization algorithm for unmanned aerial vehicles (UAVs). The goal is to create a system that can be used to automate the process of collecting and analyzing data from UAVs in environmental and agricultural applications. The system is built using Python, QGroundControl, and the MAVLink communication protocol. The developed system aims to optimize UAV routes in order to improve the efficiency of environmental monitoring and mapping tasks. It automates the process of data collection and analysis, allowing for more accurate and timely information about the state of the environment and agricultural land. Results from testing the system demonstrate its high level of effectiveness in real-world scenarios. The conclusions from this study suggest that the proposed route optimization algorithm can be successfully applied to various environmental and agricultural use cases. Further development of the system is proposed to enhance its capabilities and expand its use in other areas.
MATHEMATICAL SCIENCES
This study presents the development of a simulation model for the in-situ leaching (ISL) process of uranium, incorporating the complex kinetics of the dissolution of both tetravalent and hexavalent uranium compounds, as well as the interaction of the leaching solution with the ore-bearing host rock. To determine the reaction rate constants, experimental data were obtained from a flow-through leaching setup using a representative ore sample. Analysis of the resulting uranium extraction curves enabled the identification of the rate constants for the key chemical reactions between the ore constituents and the leaching reagent. These parameters were subsequently used as input for numerical simulations aimed at predicting the temporal dynamics of uranium recovery. The model was validated against field data collected from the Budenovskoye uranium deposit. The comparison between simulated and experimental extraction curves demonstrated strong agreement, thereby confirming the robustness and reliability of the model. The results underscore the model’s potential for practical application in forecasting and optimizing the performance of in-situ leaching operations for uranium recovery.
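As an illustration of the rate-constant identification step, the sketch below fits a single first-order dissolution rate constant to an extraction curve by linearizing R(t) = 1 − exp(−kt). The paper's actual model couples several reactions for tetravalent and hexavalent uranium compounds; this one-reaction version and its rate constant are illustrative assumptions.

```python
import math

def recovery_curve(k, times):
    """First-order leaching kinetics: recovered fraction
    R(t) = 1 - exp(-k * t), with k a hypothetical rate constant."""
    return [1.0 - math.exp(-k * t) for t in times]

def fit_rate_constant(times, recoveries):
    """Estimate k by least squares on the linearized form
    -ln(1 - R) = k * t (regression through the origin)."""
    num = sum(t * -math.log(1.0 - r) for t, r in zip(times, recoveries))
    den = sum(t * t for t in times)
    return num / den
```

With a constant so identified from flow-through column data, the forward model can then be run over the well-field geometry to predict the temporal dynamics of recovery, which is the validation step performed against the Budenovskoye field data.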
The structure of computably enumerable equivalence relations under computable reducibility (commonly referred to as ceers) has been actively developed over the past 25 years. A comprehensive survey by Andrews and Sorbi presented numerous structural properties of ceers, most notably investigating the existence of joins and meets in the degree structure of ceers. They divided the structure into two definable parts: dark ceers (ceers without an effective transversal) and light ceers (ceers with an effective transversal). They also showed the existence of infinitely many minimal dark ceers (modulo equivalence relations with finitely many classes). Minimal dark ceers exhibit the distinctive property that every pair of classes is computably inseparable. Furthermore, the classes of weakly precomplete equivalence relations (i.e., those that lack a computable diagonal function) are also computably inseparable. In this context, a natural question arises: do minimal dark equivalence relations exist that are not weakly precomplete? This paper provides an affirmative answer to this question. Moreover, we establish the existence of an infinite family of non-weakly precomplete minimal dark ceers that avoids the lower cone of a given non-universal ceer. We denote by FC the set of ceers consisting of only finite classes. Andrews, Schweber, and Sorbi showed the existence of dark FC equivalences. In this paper, we prove that over any dark FC ceer, there exists an infinite antichain of dark FC ceers.
This article is devoted to the solvability questions of a multipoint boundary value problem for a system of loaded differential equations with a parameter. The investigation is carried out using the Dzhumabaev parameterization method. This allows us to reduce the original boundary value problem to a system of algebraic equations and Cauchy problems for ordinary differential equations. Modifications have been introduced into the parameterization method algorithms to reduce the influence of boundary conditions on the convergence of the algorithm. Following the specifics of the parameterization method, sufficient conditions for the existence and uniqueness of a solution to the multipoint boundary value problem for systems of loaded differential equations with a parameter have been established. These conditions also ensure the convergence of the modified algorithms. The research results provide a constructive tool for analyzing the behavior of various models described by boundary value problems for loaded differential equations with parameters. The presented example clearly demonstrates the feasibility and effectiveness of the proposed method.
This scientific paper considers a nonlocal boundary value problem for a certain class of integro-differential equations that include an involutive transformation in their structure. The main focus is on the application of the parameterization method developed and proposed by Professor D. Dzhumabayev, the aim of which is to study the conditions for the existence and uniqueness of solutions for such problems, as well as to determine the spectrum of eigenvalues of the corresponding boundary value problem. As is known from theory, the Cauchy problem for equations involving involutions does not always have a unique solution. To overcome this difficulty, parameters are introduced at the midpoint of the considered interval, and a transformation is performed that ensures the existence of a unique solution to the Cauchy problem. This transformation allows the original nonlocal boundary value problem to be divided into two parts: first, a special Cauchy problem, and second, a system of linear algebraic equations with respect to the introduced parameters. After substituting the solution into the boundary conditions, a system of equations is constructed, the solvability of which depends on the non-degeneracy of the corresponding matrix. In addition, the case of non-uniqueness of the solution is considered, in which the eigenvalues are studied and the paper establishes criteria ensuring the existence of solutions to the initial boundary value problem.
This paper investigates series over Price multiplicative systems with coefficients belonging to the class of sequences of bounded variation. Conditions are obtained for estimating the norm of the sum of such series in weighted Lebesgue spaces. These conditions are formulated in terms of the weight function and the corresponding weight sequence. The methodology relies on techniques of harmonic analysis, the Abel transformation, and the Muckenhoupt criteria for the boundedness of the Hardy operator in weighted Lebesgue spaces. Additionally, discrete three-weight Hardy inequalities are considered, and their applicability to the analyzed series is examined. The main theorems establish a relationship between the variation of the coefficients and the integral characteristics of the weights. The results extend the applicability of known analytical methods to a wider class of functional series and are of interest in harmonic analysis, series theory, and the estimation of solutions to differential equations in functional spaces.
This study presents an innovative approach to predicting the risk of cardiovascular diseases (CVD) based on a comprehensive analysis of clinical, immunological and biochemical markers using mathematical modeling and machine learning methods. The initial data include indicators of humoral and cellular immunity (CD59, CD16, IL-10, CD14, CD19, CD8, CD4, etc.), cytokines and inflammation markers (TNF, GM-CSF, CRP), growth and angiogenesis factors (VEGF, PGF), proteins involved in apoptosis and cytotoxicity (perforin, CD95), as well as indicators of liver function, kidney function, oxidative stress and heart failure (albumin, cystatin C, N-terminal pro-B-type natriuretic peptide (NT-proBNP), superoxide dismutase (SOD), C-reactive protein (CRP), cholinesterase (ChE), cholesterol and glomerular filtration rate (GFR)). Clinical and behavioral risk factors are also taken into account: arterial hypertension (AH), previous myocardial infarction (PMI), coronary artery bypass grafting (CABG) and/or stenting, coronary heart disease (CHD), atrial fibrillation (AF), atrioventricular block (AV block), diabetes mellitus (DM), as well as lifestyle (smoking, alcohol consumption, physical activity level), education, and body mass index (BMI). The study included 52 patients aged 65 years and older. Based on the obtained clinical, biochemical and immunological data, a model for predicting the risk of premature cardiovascular aging was developed using mathematical modeling and machine learning methods. The aim of the study was to develop a prognostic model that allows for early detection of a predisposition to the development of CVD and its complications. To solve the forecasting problem, numerical methods of mathematical modeling were used, including the Runge-Kutta, Adams-Bashforth and backward Euler methods, which made it possible to describe the dynamics of changes in biomarkers and patients' condition over time with high accuracy.
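Among the numerical methods listed, the classical fourth-order Runge-Kutta scheme can be sketched in a few lines. The biomarker dynamics themselves are not reproduced here, so the example integrates a generic scalar ODE dy/dt = f(t, y).

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, n):
    """Integrate from t0 to t_end in n equal RK4 steps; returns y(t_end)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

RK4's O(h⁴) global accuracy is why it is a common default for smooth biomarker trajectories, while implicit schemes such as backward Euler (also listed above) are preferred when the system is stiff.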
The greatest association with aging processes was demonstrated by HLA-DR (50%), CD14 (41%) and CD16 (38%). BMI correlated with placental growth factor (37%). Glomerular filtration rate positively correlated with physical activity (47%), while SOD activity negatively correlated with it (48%), which reflects a decrease in antioxidant protection. The obtained results make it possible to increase the accuracy of cardiovascular risk forecasting and to formulate personalized recommendations for the prevention and correction of its development.
This paper studies neighborhoods, weak orthogonality, and almost orthogonality of complete non-algebraic 1-types in weakly order-minimal (weakly o-minimal) theories. A neighborhood is introduced as a tool to describe the local properties of type realizations and to generalize the notion of algebraic closure within a type. Their use allows us to distinguish between types and to refine the structure of their interaction. We formulate and prove the main properties of neighborhoods. On the basis of these results, we investigate the relationships between weak and almost orthogonality of types. In particular, we obtain criteria describing their equivalence, symmetry, and behavior for various classes of types (irrational, quasisolitary, and quasirational). Thus, the paper contributes to clarifying and developing the concepts of orthogonality in weakly o-minimal theories. It is also shown that for certain classes of weakly o-minimal theories, weak and almost orthogonality coincide. The results obtained provide new tools for analyzing the geometry of types in weakly o-minimal theories and open perspectives for further research on structures of weakly o-minimal type. In addition, the proposed approaches can be used for comparison with more general classes of theories.
PHYSICAL SCIENCES
The paper presents an analytical model for determining the resist contrast in electron lithography with nonuniform deposited energy over the depth, which is typical for low-energy electron exposure. In the classical approach, the contrast is determined from the logarithmic dependence of the residual resist thickness on the exposure dose and assumes homogeneity of the deposited energy over the layer depth, which leads to an overestimation of the contrast value in the presence of a gradient of the deposited energy. The proposed model takes into account the linear change in the energy profile in the resist, which is neglected in the existing, generally accepted model, thus allowing us to extract the "true" contrast value reflecting the resist properties under given development conditions. To validate the model, experiments were carried out with an ELP-20 resist 200 nm thick on silicon substrates at electron beam energies of 5, 15 and 25 keV. Dose wedges were exposed for each energy, followed by development and topography analysis by atomic force microscopy. By fitting the model curves to the experimental dependence of the residual resist thickness on the exposure dose for each electron energy, the values of the contrast and of the parameter characterizing the gradient of the deposited energy over depth were calculated. The contrast remains almost constant as the energy of the incident electrons varies, with an average value of γ = 1.67. Thus, the increase in contrast with decreasing electron energy observed within the classical approach should be considered an artifact of the model used. The proposed model is applicable for precision calibration of the processes of forming three-dimensional resist structures using grayscale lithography.
The morphology of zinc oxide (ZnO) powders synthesised via a modified microwave-assisted method under varying heating parameters, as well as by chemical bath deposition, was investigated. Image analysis revealed clear correlations between synthesis parameters and structural features. Increasing the microwave heating time at constant power led to a consistent transformation from loose nanoparticles to dense, well-faceted microstructures. In contrast, reducing heating power slowed crystallisation and agglomeration, preserving a finer, more porous structure. Scanning electron microscopy also demonstrated significant morphological differences in samples grown by chemical bath deposition, which were strongly influenced by the initial molar concentration of zinc acetate while keeping the concentrations of other solution components constant. These findings confirm that low-cost, environmentally friendly synthesis approaches can be used to control ZnO particle morphology through careful adjustment of precursor concentrations, heating time, and microwave power. Photocatalytic degradation tests of rhodamine B demonstrated a strong link between particle morphology and degradation rate. The highest rate (~0.5 h⁻¹) was recorded for a chemically precipitated sample, whereas the lowest (~0.1 h⁻¹) corresponded to a microwave-synthesised sample.
In this paper, the synthesis of a hybrid nanostructure based on WS2@MXene and a comprehensive study of its structural properties are considered. The WS2@MXene material was obtained using a two-stage method. First, the Ti3AlC2 MAX phase was treated with hydrofluoric acid to obtain the layered material Ti3C2 MXene; then WS2 nanostructures were synthesized by a hydrothermal method, and the two components were combined by ultrasonic mixing. The synthesized samples were studied using X-ray diffraction, scanning electron microscopy, and X-ray photoelectron spectroscopy. X-ray diffraction data confirmed the hexagonal structure of the WS2 phase and an increase in the MXene interlayer spacing. The morphology images clearly showed WS2 petals embedded in MXene leaflets, and the chemical bonding study confirmed the expected elemental composition of the hybrid. The synthesized material has a hierarchical structure consisting of complex nanotubes and nanowires. This structure provides a high specific surface area, efficient electron transfer, and an increased number of active catalytic sites. The WS2@MXene hybrid is a promising material for energy storage systems, hydrogen evolution reactions, and gas detection sensors.
The synthesis of graphene nanostructures and their tribological properties for applications in space technologies are discussed in this article. The results of experiments on graphene synthesis using the chemical vapor deposition (CVD) method are presented. A series of experiments aimed at determining the optimal technological parameters for obtaining high-quality graphene identified the optimal synthesis conditions: temperature (1000 °C) and gas ratio (0.5/200 cm³/min). The scanning electron microscopy (SEM) results confirm the high quality of the synthesized nanostructures and their uniform distribution. Raman spectroscopy, performed to assess the quality of the nanostructures, established that the synthesized nanostructures are graphene with an I2D/IG ratio of 1.89. The analysis showed that the obtained graphene has a monolayer structure. Tribological experiments demonstrated that the graphene coating significantly reduces the coefficient of friction compared to conventional steel. The obtained results confirmed the high efficiency of the graphene coating both when using lubricants and under dry friction conditions. This opens up opportunities for improving the performance of tribological systems and extending their service life. Further research will focus on improving synthesis methods and evaluating the strength characteristics of graphene nanostructures under space operation conditions.
OIL AND GAS ENGINEERING, GEOLOGY
The problem of optimizing oil production has always been one of the most pressing. The article focuses on improving the energy efficiency and optimizing the operating modes of sucker-rod pumping units (SRPU). Special attention is paid to accounting for the decrease in the oil level in the well, which affects the hydrostatic pressure and the load on the pump. The results of kinematic and kinetostatic analyses of the transforming mechanism of the SRPU drive are obtained, and on their basis the wattmetrogram is calculated. The wattmetrogram makes it possible not only to monitor energy consumption but also to fine-tune the balancing mechanisms and drive systems, increasing overall efficiency and revealing weak points. A decrease in the oil level reduces the overall load, and accordingly the amplitude of the wattmetrogram decreases. The dynamogram adds dynamic oscillations to the load, complicating the power profile. Together, these two factors make the calculated results more realistic, which is important for the correct selection and adjustment of counterweights and engine power, and for optimizing SRPU operation. The results in the article are obtained taking into account the characteristics of the electric motor and gear reducer used in the Class II four-link straight-line guide converting mechanism of a specific unbalanced SRPU drive.
This study investigates the origin of oil from the south-eastern Precaspian Basin through biomarker analysis of five crude oil samples using gas chromatography-mass spectrometry (GC-MS). Biomarkers serve as molecular fingerprints to elucidate hydrocarbon origin, source rock characteristics, and thermal history. The analyzed samples revealed a marine depositional environment (C₂₆/C₂₅ terpane ratios = 0.59–0.79), carbonate-dominated source rocks (C₂₉/C₃₀ hopane ratios = 4.78–5.55), sourcing from Paleozoic (Permo-Carboniferous) strata (C₂₈/C₂₉ sterane ratios = 0.44–0.57), and peak-to-late oil-window maturity according to the Ts/Tm, C₂₉Ts/C₂₉Tm, and C₂₉ sterane isomerization ratios. These findings demonstrate the basin's complex hydrocarbon generation history, with biomarker distributions indicating marine organic matter input under anoxic conditions, carbonate-evaporitic source facies, and thermal equilibrium consistent with primary oil generation. The results provide valuable insights for exploration strategies in similar frontier basins, with implications for reducing exploration risk and optimizing resource development. Future studies should combine these geochemical data with structural and stratigraphic analyses to refine migration models.
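The ratio-based reasoning in the abstract above can be sketched programmatically. The snippet below is an illustrative sketch, not the authors' workflow: it tabulates diagnostic biomarker ratios using the values quoted in the abstract, and the interpretive cut-offs in the comments are simplified assumptions for illustration only.

```python
# Illustrative sketch of biomarker-ratio interpretation.
# Ratio values are taken from the abstract; the threshold
# cut-offs below are simplified assumptions, not the study's.

samples = {
    "oil_1": {"C26_C25_terpane": 0.59, "C29_C30_hopane": 4.78, "C28_C29_sterane": 0.44},
    "oil_5": {"C26_C25_terpane": 0.79, "C29_C30_hopane": 5.55, "C28_C29_sterane": 0.57},
}

def interpret(ratios):
    """Return qualitative source/facies notes for one oil sample."""
    notes = []
    # Low C26/C25 tricyclic terpane (< 1) is commonly read as marine deposition
    if ratios["C26_C25_terpane"] < 1.0:
        notes.append("marine depositional environment")
    # High C29/C30 hopane (here ~4.8-5.6) points to carbonate-dominated source rocks
    if ratios["C29_C30_hopane"] > 1.0:
        notes.append("carbonate-dominated source rock")
    # Low C28/C29 sterane is associated with older (Paleozoic) organic matter
    if ratios["C28_C29_sterane"] < 0.7:
        notes.append("Paleozoic source strata")
    return notes

for name, ratios in samples.items():
    print(name, "->", "; ".join(interpret(ratios)))
```

In practice such cut-offs are calibrated against regional reference oils rather than fixed universally, which is why the study pairs the ratios with maturity indicators before drawing conclusions.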
This study presents a comprehensive investigation of the ore component content in rocks and its relationship with the mineralogical composition. The aim of the research was to identify patterns in the distribution of ore elements, particularly copper, depending on the composition of primary and secondary minerals. The work is based on an integrated analysis involving both petrographic and mineralogical research methods. As part of the study, a detailed examination of rock samples taken from borehole core was carried out. Particular attention was given to the microscopic analysis of thin sections, which made it possible to determine the textural and structural features of the rocks and perform a quantitative assessment of the content of primary and secondary minerals. To improve accuracy, geochemical analysis methods were used to refine the content of ore components and determine the percentage of various minerals in the rocks. The results showed that copper content can vary significantly depending on the degree and type of secondary alteration, as well as the presence of specific minerals such as chalcopyrite, secondary quartz, chlorite, and sericite. To identify and quantitatively assess the relationships between copper content and mineralogical composition, Pearson's correlation coefficient was applied. This made it possible to establish statistically significant correlations between certain minerals and copper concentration. Positive correlations were identified between copper content and specific minerals, as well as negative correlations with minerals formed during intense alteration processes. The obtained results have practical significance for geological exploration. The revealed correlations can be used during core description and in the development of predictive models of ore-bearing potential, which significantly increases the efficiency of exploration work.
The methodology applied in this research can also be adapted for other deposits with similar mineralization conditions.
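The correlation step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the copper grades and modal mineral percentages below are hypothetical placeholder data, not values from the study.

```python
# Minimal sketch: Pearson's r between copper grade and the modal
# percentage of a mineral across core samples. All sample data
# here are hypothetical, for illustration only.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cu_grade = [0.2, 0.5, 0.9, 1.1, 1.4]       # % Cu per sample (hypothetical)
chalcopyrite = [1.0, 2.5, 4.0, 5.5, 7.0]   # modal % (hypothetical)

r = pearson_r(cu_grade, chalcopyrite)
print(f"r(Cu, chalcopyrite) = {r:.3f}")    # strong positive correlation
```

A strongly positive r would mirror the positive copper-chalcopyrite association reported in the study, while alteration minerals would be expected to yield negative coefficients; statistical significance would still need to be checked against sample size.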
This study investigates the mineralogical and spatial characteristics of hydrothermal alteration at the Bozshakol porphyry copper deposit within the Central Asian Orogenic Belt (CAOB) in Kazakhstan. Geological and geophysical exploration conducted from 2018 to 2024 provided an extensive dataset comprising approximately 24,000 meters of drill core samples collected at two-meter intervals. Analytical methods included short-wave infrared (SWIR) spectroscopy using Arcspectro FT-NIR Rocket (900–2600 nm) and TerraSpec4 spectrometer (350–2500 nm), complemented by magnetic susceptibility measurements utilizing a KT-10 kappameter. Spectral data interpretation employed The Spectral Geologist (TSG) software, integrating automated mineral identification (TSA algorithm) and manual validation (Aux Match). This integrated approach precisely mapped hydrothermal alteration zones, delineating potassic, phyllic, and propylitic facies, each defined by distinct mineral assemblages and magnetic signatures. Potassic alteration, located centrally, features secondary biotite and K-feldspar. This transitions outward into a phyllic halo characterized by pervasive sericitization and reduced magnetite content. The peripheral propylitic zone displays abundant chlorite, epidote, carbonate minerals, and elevated magnetic susceptibility due to magnetite preservation. A key outcome was identifying zones rich in chlorite and epidote, known to adversely affect flotation recovery rates, thus impacting ore-processing efficiency. Leapfrog Geo software facilitated 3D modeling, enhancing visualization and structural interpretation of alteration domains. This comprehensive characterization significantly improves geological understanding and supports optimized exploration and processing strategies, demonstrating best practices in applying modern spectroscopic and geophysical methods to porphyry copper deposits.
The article presents the results of a comprehensive analysis of geological and geophysical data from a field located in the southern part of the Pre-Caspian sedimentary basin, characterized by complex fault-block structures, active halogenesis processes, and high lithological–stratigraphic variability. The study utilized drilling data, including core analysis, spectral gamma-ray logging, neutron–acoustic surveys, results of petrophysical interpretation, as well as refined re-interpretation of 3D seismic data. Integration of diverse information enabled the construction of a detailed three-dimensional geological model of productive horizons (Lower Albian, Aptian, and Neocomian formations), incorporating structural, lithological, and petrophysical modeling. Five structural segments were identified, the fault system was refined, and the spatial distribution of facies and reservoir zones with commercial potential was determined. Reservoir properties vary: effective porosity ranges from 13 to 19%, and water saturation from 24 to 36%. The use of indicator modeling and Gaussian simulation improved the accuracy of the geological concept reproduction and the model's consistency with actual data. The results provide a more precise assessment of hydrocarbon accumulation geometry, reserves distribution, and reservoir parameters, thus reducing geological risks and uncertainties. The developed model serves as the basis for hydrodynamic modeling, designing efficient development systems, and optimizing field operations.
ECONOMY AND BUSINESS
Artificial intelligence (AI) technologies such as machine learning, predictive analytics, and natural language processing are increasingly being integrated into project workflows in organizations. However, while AI improves efficiency, automation, and decision-making, many organizations struggle with technology infrastructure, workforce readiness, and regulatory compliance. The purpose of this study is to review and assess the critical success factors (CSFs) that influence the implementation of AI technology in project management. Using bibliometric analysis and expert assessment, leading research on the topic was reviewed, key development areas in the field were examined, and the CSFs that contribute to the effective implementation of AI technology were identified. The findings indicate that successful integration of AI requires senior management support, strong leadership, organizational agility, workforce competence, and technology readiness. The results will be useful for project managers in organizations planning to implement AI to improve the efficiency of their workflows and projects. Applying the recommended list of six CSFs in the project management system will allow organizations to transition most smoothly and adaptively from traditional project management methods to AI-based ones.
Efficient project planning and execution require implementing new methods and techniques in place of traditional ones. Artificial intelligence (AI) driven tools can allocate project resources more effectively and execute projects more efficiently. This study explores the role of AI implementation in project management (PM). A bibliometric analysis was performed using the VOSviewer and CiteSpace software tools on academic papers published from 2010 to 2025 and indexed in the Web of Science database. The results show significant growth in publications on the use of AI in PM. AI tools are most often used for resource optimization, risk prediction, and cost estimation in PM. Keyword analysis revealed the growing importance of fields such as machine learning, big data, and neural networks. The study also highlights the main benefits and challenges of using AI tools in PM and the growing interest of the scientific community in this topic.
ISSN 2959-8109 (Online)