COMPUTER SCIENCE
This article discusses the process of developing a Kazakh sign language recognition system using the MediaPipe platform. The platform allows for efficient real-time gesture recognition. The main focus is on creating models for gesture recognition, training neural networks, and integrating with the MediaPipe platform. One of the key aspects is achieving high accuracy and speed in gesture processing by using a neural network architecture. The system was trained on a large dataset of annotated gestures, which significantly improved the recognition quality. For recognizing Kazakh sign language gestures, an LSTM neural network was used because it works effectively with time series and data sequences. The model was trained on 30 Kazakh sign language gestures, enabling the conversion of gestures into text in real time. This approach greatly facilitates communication with people who have hearing and speech impairments and contributes to increased inclusivity. Additionally, a user-friendly web interface was developed, allowing easy integration of the neural network with applications for gesture recognition. Another key aspect of the work is improving data annotation and processing methods to enhance recognition accuracy. The future development of the system includes expanding the sign language gesture database and integration with web applications. This will improve social inclusion for people with hearing and speech impairments and create a broad, accessible platform.
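As a rough, self-contained sketch of how such a network consumes a gesture clip, the loop below runs a single hand-coded LSTM cell over a sequence of per-frame landmark vectors. All dimensions and inputs are illustrative stand-ins (21 MediaPipe hand landmarks × 3 coordinates, random values in place of real landmark data), not the trained system itself:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: input x, hidden state h, cell state c."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    n = h.shape[0]
    i = 1 / (1 + np.exp(-z[:n]))           # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))        # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))      # output gate
    g = np.tanh(z[3*n:])                   # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h, T = 63, 32, 20                  # 21 landmarks x 3 coords, 20 frames
W = rng.normal(0, 0.1, (4 * d_h, d_in))
U = rng.normal(0, 0.1, (4 * d_h, d_h))
b = np.zeros(4 * d_h)
h = c = np.zeros(d_h)
for t in range(T):                          # run over one gesture clip
    frame = rng.normal(size=d_in)           # stand-in for a landmark vector
    h, c = lstm_step(frame, h, c, W, U, b)
print(h.shape)                              # final hidden state -> gesture classifier
```

The final hidden state summarizes the whole clip and would feed a softmax layer over the 30 gesture classes.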
In the field of speech recognition, end-to-end models are gradually replacing traditional and hybrid approaches. Their main principle is autoregressive decoding, where the output sequence is formed from left to right. However, it has not yet been proven that this method provides the best results in converting speech to text. Moreover, end-to-end models rely solely on the previous context, which complicates the processing of unclear or distorted sounds. In this regard, the insertion method was proposed, which does not use autoregressive decoding and generates output in an arbitrary order. This paper examines a Kazakh speech recognition model trained using the insertion method and Connectionist Temporal Classification (CTC). The experiments conducted showed that this method improves recognition accuracy. Unlike autoregressive models, the insertion method provides greater flexibility in processing sequences, as it does not require a strict order for generating output. This reduces decoding delays and makes the model more robust to poorly pronounced words. Furthermore, combining the insertion method with CTC improves the alignment of audio data and text transcription. This is especially important for agglutinative languages such as Kazakh. According to the experimental results, the recognition error rate of the proposed model reached 10.2%, making it competitive with current systems.
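The CTC component mentioned above maps frame-level label paths to transcriptions by a simple collapse rule: merge repeated labels, then drop the blank symbol. A minimal greedy version (the example string is an invented path, not model output):

```python
# Greedy CTC collapse: merge repeated labels, then remove the blank.
BLANK = "-"

def ctc_collapse(path):
    out, prev = [], None
    for s in path:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return "".join(out)

print(ctc_collapse("--сс-әә-лл--ее-мм"))  # -> "сәлем" ("hello" in Kazakh)
```

Note that a blank between two identical labels (as in `"aa-a"` → `"aa"`) is what lets CTC emit doubled letters, which matters for the alignment quality the abstract refers to.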
This article presents a hybrid machine learning model designed for soil type classification based on the analysis of geophysical characteristics. The proposed model combines two algorithms – RandomForestClassifier and MLPClassifier – integrating the high accuracy of ensemble methods with the ability of neural networks to capture complex nonlinear dependencies between parameters. The input dataset included indicators such as electrical conductivity, density, P-wave propagation velocity, and burial depth. Prior to training, data preprocessing was performed, including outlier removal, standardization, and categorical feature encoding. The hybrid architecture allowed the integration of results from both models with different weights, optimizing classification accuracy. The effectiveness of the proposed approach was compared with alternative algorithms such as XGBoost and Keras using metrics including Accuracy, F1-score, Precision, and Recall. The hybrid model achieved an accuracy of 96.07%, outperforming individual algorithms. Visualization of confusion matrices provided insights into class distribution and model robustness. The results confirm that combining ensemble and neural methods ensures more stable and reliable predictions when working with geophysical data. The developed model can be effectively applied in geotechnical studies, construction, agriculture, and environmental monitoring, enhancing analytical efficiency and reducing the need for costly laboratory testing.
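The weighted combination of the two models can be sketched as a blend of their class-probability outputs. The arrays below are invented stand-ins for the `predict_proba` outputs of the trained RandomForestClassifier and MLPClassifier, and the weight 0.6 is illustrative, not the paper's optimized value:

```python
import numpy as np

def weighted_vote(p_forest, p_mlp, w=0.6):
    """Blend two (n_samples, n_classes) probability matrices and pick a class."""
    p = w * p_forest + (1 - w) * p_mlp
    return p.argmax(axis=1)

p_rf = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.3, 0.6]])
p_nn = np.array([[0.4, 0.5, 0.1],
                 [0.2, 0.2, 0.6]])
print(weighted_vote(p_rf, p_nn))   # -> [0 2]
```

Tuning `w` on a validation set is one way the relative weights described in the abstract could be optimized.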
The article addresses the cyber-physical security of wireless sensor networks (WSNs). Modern WSNs are vulnerable to a wide class of attacks, such as sinkhole and wormhole attacks, man-in-the-middle attacks, data substitution attacks, universal network attacks, etc. In practice, protecting WSNs from such attacks is hampered by the variety of possible impacts, the highly specialized nature of the infrastructure, and the limited resources of network nodes. This article proposes a comprehensive technique for identifying security incidents in WSNs for effective attack detection and incident response, which minimizes potential damage and ensures uninterrupted network operation. The novelty of the technique lies in its comprehensiveness: the ability to identify various cyber-physical threats and to ensure high accuracy and completeness of incident detection, taking into account the distributed structure and the dynamics of changes in the composition of WSN nodes. The technique has been tested on a WSN fragment operating on the ZigBee protocol to monitor the characteristics of the atmospheric air of an industrial facility or a city. The developed technique will help improve the quality and timeliness of detecting security incidents in wireless sensor networks, which will enhance the resilience of networks to external and internal malicious influences and prevent long-term interruptions in the operation of the infrastructure in the event of successful attacks.
Given the sophisticated technologies that modern industrial organizations are equipped with, monitoring and diagnostics of equipment condition are critical tasks. The current study aims to develop an improved diagnostic system for industrial equipment in the oil and gas industry using Schneider Electric M241 and M340 programmable logic controllers (PLCs). The first step in this process is to analyze the faults that occur during equipment operation, as well as to study the signal processing methods used in the oil and gas industry. The second step is to use PLCs for automated data collection, parameter monitoring, and diagnostics of equipment condition. This approach allows for real-time control of key technological processes, reducing the probability of failures and increasing the reliability of production equipment. The study examined the impact of various data processing strategies on the efficiency of industrial equipment diagnostics. PLC data collection and analysis methods were considered, including continuous parameter monitoring, threshold control, and trigger events. Based on these methods, diagnostic algorithms were developed and implemented in the EcoStruxure Machine Expert and EcoStruxure Control Expert environments, which provide automatic fault detection and alarm generation.
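The threshold-control strategy mentioned above amounts to a comparison rule of the kind a PLC program evaluates on each scan cycle. A minimal sketch (the limits and readings are illustrative, not plant data, and the real logic would run in the controller's IEC 61131-3 program rather than Python):

```python
def check(value, low, high):
    """Classify one sampled parameter against alarm limits."""
    if value < low:
        return "LOW_ALARM"
    if value > high:
        return "HIGH_ALARM"
    return "OK"

readings = [4.8, 5.1, 7.9, 3.2]          # e.g. pump pressure samples, bar
print([check(v, low=4.0, high=7.0) for v in readings])
# -> ['OK', 'OK', 'HIGH_ALARM', 'LOW_ALARM']
```

A trigger-event strategy would additionally latch the transition into an alarm state (edge detection) so that each fault generates exactly one alarm record.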
The demand for intrusion detection systems (IDSs) that can promptly identify both known and new types of attacks is rising due to the rapid expansion of cyber threats and the consequent increase in network traffic. Using machine learning to autonomously analyze the behavior of network packets and classify them as normal or malicious is a promising way to address this issue. The objective of this investigation is to assess the suitability of a variety of machine learning algorithms for network security tasks, using network data analysis on the UNSW-NB15 dataset as an illustration. Specifically, the study evaluates the effectiveness of Random Forest, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), XGBoost, LightGBM, and Logistic Regression models in network security applications. According to the analysis, all models exhibited high classification accuracy; however, the LightGBM model attained the best results. This model exhibited the highest values of Accuracy (95.86%), Precision (96.02%), and F1-measure (96.99%), confirming its capacity to effectively manage complex and heterogeneous data. Overall, the study underscores the significance of selecting the most appropriate model based on the security system's objectives and the specifics of the data.
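For readers unfamiliar with the metrics quoted above, they follow directly from the binary confusion counts. The counts below are made-up numbers for illustration, not the paper's UNSW-NB15 results:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from binary confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf1(tp=960, fp=40, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))   # -> 0.96 0.97 0.965
```

Note that F1 is the harmonic mean of precision and recall, so an F1 above the reported precision (as in the abstract) simply implies a recall higher still.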
The study examines the problem of intraday forecasting of the EUR/USD currency pair using various neural network architectures, in particular models integrating attention mechanisms. Three neural network architectures were studied: the basic LSTM model, the LSTM model with the Bahdanau attention mechanism, and the Transformer model with the self-attention mechanism. The experiment was conducted on historical minute data for the period from January 2020 to December 2022. The results showed that attention-based models significantly outperform the basic LSTM architecture. The best results were obtained by the Transformer model (MSE=0.185, MAE=0.297, RMSE=0.431, MAPE=7.3%). A detailed analysis confirmed the stability and accuracy of the Transformer model. The identified advantages of attention models justify their prospects for use in algorithmic trading and require further research to optimize and adapt them to real trading conditions. In particular, further research may be aimed at integrating attention models with trading strategies and risk management systems, as well as studying their behavior in the face of sudden changes in market volatility. In addition, it is proposed to explore the possibilities of combining attention architectures with other forecasting methods to increase the overall stability and reliability of forecasts in practical trading.
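The four error metrics used for the comparison above are standard and easy to state precisely. A small sketch on toy data (the `y_true`/`y_pred` values are illustrative, not EUR/USD quotes):

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """MSE, MAE, RMSE, and MAPE (in percent) of a point forecast."""
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100
    return mse, mae, rmse, mape

y_true = np.array([1.10, 1.12, 1.11, 1.13])
y_pred = np.array([1.11, 1.10, 1.12, 1.12])
mse, mae, rmse, mape = forecast_errors(y_true, y_pred)
```

MAPE is scale-free, which is why it is the most interpretable of the four when comparing models across instruments with different price levels.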
As cyber threats become more complex, traditional vulnerability detection methods lose their effectiveness. The purpose of this work is to develop and test an approach to identifying vulnerabilities based on the analysis of data from thematic Internet resources: forums, blogs, and social networks. These sources contain a large amount of unstructured information, which requires the use of data mining methods. The work integrates modern technologies: the pre-trained SecBERT language model (Security Bidirectional Encoder Representations from Transformers), designed for cybersecurity tasks, and the adaptive neuro-fuzzy inference system DENFIS (Dynamic Evolving Neural-Fuzzy Inference System). The proposed system filters out irrelevant messages and highlights indicators of compromise and potential threats. The use of fuzzy logic makes it possible to efficiently process vague and incomplete information. Experiments confirmed high classification accuracy and stable fuzzy clustering performance (FPC = 0.93; PE = 0.28; XB = 0.042). The system demonstrated the ability to promptly detect signs of cyber threats and has scalability potential for monitoring and attack prediction tasks. The results indicate its potential in increasing the speed of response to cyber threats and strengthening the protection of information systems.
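Of the three clustering validity indices quoted above, the fuzzy partition coefficient (FPC) is the simplest: the mean of the squared membership values, approaching 1 for crisp, confident clusters. A sketch on a made-up membership matrix (not the paper's DENFIS output):

```python
import numpy as np

def fpc(U):
    """Fuzzy partition coefficient of an (n_samples, n_clusters) membership matrix."""
    return float(np.sum(U ** 2) / U.shape[0])

U = np.array([[0.9, 0.1],    # each row sums to 1: one sample's memberships
              [0.8, 0.2],
              [0.1, 0.9]])
print(round(fpc(U), 3))       # -> 0.773
```

An FPC of 0.93, as reported, indicates memberships much closer to 0/1 than this toy example, i.e. well-separated clusters.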
This work is intended to study methods for the pre-processing and analysis of fundus images for the detection of diabetic retinopathy. Diabetic retinopathy (DR) is a common eye disease in patients with diabetes, and early diagnosis makes it possible to prevent vision loss. During the study, modern methods for processing and analyzing fundus images were used, including the EfficientNetB0 architecture based on deep learning. Image augmentation (rotation, scaling, cropping, contrast enhancement) and normalization methods were applied during pre-processing. When using the EfficientNetB0 architecture, two approaches were tested: training with the base layers frozen, and additional adaptation (fine-tuning) by unfreezing the upper layers. The results were evaluated using standard metrics. The precision for the test set in the first method was 65%, and 75% in the second method. The accuracy on the validation set in the first method was 63%, and in the second method it reached 71%. The recall metric showed 60% for the test set in the first method, and 74% in the second method. In general, the fine-tuning method showed higher performance. The use of these methods improves the quality of image processing and classification for the effective diagnosis of diabetic retinopathy.
The novelty of the study is the analysis of various methods of using and adapting the highly efficient EfficientNetB0 architecture. The results obtained make it possible to improve the quality of automated systems in DR diagnostics and to increase the energy efficiency of the model. The proposed methods have high potential for the early detection of eye diseases.
This paper presents an automated method for generating the parameters of linear functions used in the diffusion layer of block symmetric encryption algorithms. The focus is on designing linear layers constructed solely from cyclic shift operations and bitwise XORs, which are both efficient and hardware-friendly. Such layers play a critical role in achieving strong diffusion, a fundamental cryptographic requirement. The proposed method evaluates candidate configurations by exhaustively enumerating shift values, calculating their branch number, and assessing their avalanche characteristics. A set of quantitative diffusion metrics is introduced to guide the selection process, including single- and multi-round avalanche effects and activation rates at the byte level. An aggregated quality function is formulated to allow comparative assessment. The developed software tool identified optimal shift parameters for 128-bit blocks processed as four 32-bit words, achieving a branch number of 5 with only 12 XOR operations. The proposed approach contributes to the practical synthesis of lightweight and secure cryptographic primitives suitable for both classical and constrained platforms.
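The search described above can be illustrated on a toy scale. Below, a single 8-bit word with 4-bit "cells" stands in for the paper's 128-bit block of 32-bit words, so the exhaustive enumeration of shift pairs and the branch-number computation stay tiny; the structure of the search (enumerate shifts, score by branch number, keep the best) is the same:

```python
def rol8(x, r):
    """Cyclic left shift of an 8-bit word."""
    return ((x << r) | (x >> (8 - r))) & 0xFF

def weight(x):
    """Number of nonzero 4-bit cells in x (byte-weight analogue)."""
    return int((x & 0x0F) != 0) + int((x >> 4) != 0)

def branch_number(a, b):
    """Branch number of L(x) = x XOR rol(x, a) XOR rol(x, b)."""
    L = lambda x: x ^ rol8(x, a) ^ rol8(x, b)
    return min(weight(x) + weight(L(x)) for x in range(1, 256))

# Enumerate all shift pairs and keep the best, mirroring the tool's search.
best = max(((a, b, branch_number(a, b))
            for a in range(1, 8) for b in range(a + 1, 8)),
           key=lambda t: t[2])
print(best)
```

On the real 128-bit construction, exhaustive input enumeration is infeasible, which is why the paper's tool combines the branch-number bound with sampled avalanche metrics.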
In the numerical solution of partial differential equations that describe complex physical processes such as heat conduction and gas dynamics, substantial computational resources are often required. To address these challenges, Physics-Informed Neural Networks (PINNs) have gained increasing attention in recent years within the fields of science and engineering. This paper investigates the application of the PINN methodology to obtain solutions to the heat conduction and gas dynamics equations. Unlike traditional numerical approaches, the physics-informed neural network framework incorporates governing physical laws directly into the neural network architecture. Consequently, the solution is constrained not only by data but also by the underlying differential equations. The paper presents the architecture of the PINN framework and details the structure of loss functions, demonstrating their relationship with the heat equation and the Euler equations using specific examples. Furthermore, the implementation of initial and boundary conditions is discussed, along with an analysis of factors influencing the stability and accuracy of the obtained solutions. The results highlight the efficiency of PINNs and demonstrate their potential for solving complex multiphase and high-dimensional problems in the future. Additionally, current research directions aimed at accelerating the computational process and enhancing the robustness of PINNs are outlined.
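For concreteness, the composite loss described above can be written out for the 1-D heat equation $u_t = \alpha u_{xx}$; this is the generic PINN form, and the paper's exact weighting and collocation sampling may differ:

```latex
\mathcal{L}(\theta) =
\underbrace{\frac{1}{N_r}\sum_{i=1}^{N_r}
  \bigl|\partial_t u_\theta(x_i,t_i) - \alpha\,\partial_{xx} u_\theta(x_i,t_i)\bigr|^2}_{\text{PDE residual}}
+ \underbrace{\frac{1}{N_0}\sum_{j=1}^{N_0}
  \bigl|u_\theta(x_j,0) - u_0(x_j)\bigr|^2}_{\text{initial condition}}
+ \underbrace{\frac{1}{N_b}\sum_{k=1}^{N_b}
  \bigl|u_\theta(x_k,t_k) - g(x_k,t_k)\bigr|^2}_{\text{boundary condition}}
```

Here $u_\theta$ is the network, its derivatives are computed by automatic differentiation, and the gas-dynamics case replaces the first term with the residuals of the Euler conservation laws.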
Technologies for automatic processing of sign language have become an urgent need for members of society with hearing and speech impairments who face inequality in the era of digital transformation. In recent years, the issue of considering sign language as a formal structure equal to natural language and adapting it to automatic systems has attracted increasing attention from researchers. To perform the task of automatically translating information from natural language into sign language, glosses, which are the textual representation of sign language, are used as an intermediate layer. For this purpose, this study proposes a new method for converting Kazakh language text, which reflects the morphological features of the Kazakh language, into sign language glosses using natural language processing techniques. In particular, a Seq2Seq architecture based on the ByT5 small model is applied. The obtained results demonstrate that the generated gloss sequences are compact and semantically rich while preserving the internal structure of sign language. The gloss sequence makes it possible to automate the work of an interpretable intermediate layer that represents sign language movements as logical units similar to written language. The transformed gloss sequence preserves the structure of sign language, reduces redundancy, and improves sentence coherence. Thus, the use of only semantically meaningful units to control sign language avatars reduces computational requirements. Short and semantically rich glosses serve as an effective resource for synthesizing hand movements in sign language.
The research introduces a dual deep learning system which predicts salary ranges by processing job descriptions through BERT-based contextual embeddings and structured metadata integration. The proposed method utilizes more than 124,000 LinkedIn job postings to merge BERT-based contextual embeddings with structured information about location, industry, experience level, and compensation type. The model uses multi-head attention to identify essential salary-related terms in job descriptions, which improves model interpretability and prediction accuracy. The model combines semantic embeddings with tabular data to create a multimodal representation which serves as input for supervised learning with an ordinal-aware loss function. The model achieves stable performance in salary classification across three categories, with F1-scores between 0.82 and 0.84. The proposed model achieves strong generalization across different sectors and job types while providing precise predictions and transparent decision-making for salary benchmarking and recruitment analytics applications.
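The abstract does not spell out its ordinal-aware loss, but one simple form such a loss can take (an assumption, shown purely for illustration) is the expected band distance: mistaking "low" for "high" costs more than mistaking "low" for "mid":

```python
import numpy as np

def ordinal_loss(p, y):
    """Expected band distance: sum_k p[k] * |k - y| for true band index y."""
    k = np.arange(len(p))
    return float(np.sum(p * np.abs(k - y)))

p = np.array([0.1, 0.3, 0.6])     # predicted probabilities for low/mid/high
print(ordinal_loss(p, 2))          # -> 0.5  (0.1*2 + 0.3*1 + 0.6*0)
```

Unlike plain cross-entropy, this penalty grows with how far the predicted salary band is from the true one, which is the property "ordinal-aware" refers to.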
Agriculture is becoming increasingly demanding due to climate change, necessitating continuous monitoring, including soil assessment for precision agriculture. Agricultural soils are heavily utilized by farmers through the application of pesticides and nitrate and phosphate fertilizers to enhance yield. The exacerbation of flood and drought conditions is causing soil irregularity, necessitating meticulous soil monitoring at each location. Such monitoring is prohibitively costly for many farmers. To address this issue, a compact, energy-efficient, low-cost mobile robotic platform equipped with various sensors for soil monitoring is proposed. Farmers can remotely manage it, analyze the upper soil strata, and examine topography. Its low cost also makes it relevant for research activities and field use, improving understanding of the environment under study. The specialized three-wheeled mobility platform is a novel apparatus engineered for autonomous navigation and task execution. The robot's three-wheel design confers exceptional mobility and stability, enabling effective operation in restricted areas and across various terrains. It is outfitted with sensors and a control system that guarantee accurate navigational control and obstacle avoidance. Its programming and modification features enable the robot to be tailored for specialized functions, including data collection, small-load transfer, and environmental monitoring. The robot is applicable to educational, scientific, industrial, and domestic uses. Consequently, the three-wheeled mobile robot serves as a versatile and promising platform for the advancement of contemporary robotic systems.
This study shows that assessing and enhancing urban sustainability has become vital. This is driven by constant urbanization and technological progress, which call for sustainable smart city development. On these grounds, we propose and describe a model for measuring and evaluating urban sustainability through Smart City Indices (SCIs). The model was applied to a total of 102 cities worldwide, enabling quick and effective calculation and visualization. The model is based on six indicators: mobility, environment, government, economy, smart people, and smart living. This makes it possible to analyze and interpret data for each indicator, to calculate the SCI for each city and indicator, and to draw conclusions about major strengths and focus areas for improvement. The obtained results are presented through data visualization techniques that make SCIs intuitive and enable the comparison of cities. The findings demonstrate large variations among smart cities, which points to the necessity of developing targeted policies and investments. With these contributions, we are confident that the paper makes a valuable addition to the existing knowledge and provides further guidance and recommendations for stakeholders involved in building a sustainable urban environment.
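One plausible way to aggregate the six indicators into a single score is a simple normalized mean; this is an assumption for illustration, as the abstract does not publish the exact aggregation formula, and the city scores below are invented:

```python
INDICATORS = ["mobility", "environment", "government",
              "economy", "smart people", "smart living"]

def sci_score(scores):
    """Aggregate per-indicator scores (each on a 0-100 scale) into one SCI value."""
    return sum(scores[k] for k in INDICATORS) / len(INDICATORS)

city_x = {"mobility": 62, "environment": 55, "government": 70,
          "economy": 68, "smart people": 73, "smart living": 66}
print(round(sci_score(city_x), 1))   # -> 65.7
```

Keeping the per-indicator values alongside the aggregate is what allows the strengths-and-weaknesses comparison the abstract describes.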
The article is devoted to the design and configuration of a secure network gateway for cloud applications based on modern VPN protocols OpenVPN and WireGuard. In the context of the rapid development of cloud technologies and the increasing number of cyberattacks, ensuring secure remote access to services has become a key task of information security. The paper discusses relevant threats arising during data transmission in cloud environments and highlights the role of VPN technologies in preventing attacks. The features of OpenVPN and WireGuard are analyzed in detail, including their architecture, cryptographic foundation, ease of configuration, and performance. The study presents a gateway architecture comprising a VPN server, firewall filters, and routing mechanisms that enforce mandatory transmission of all traffic through an encrypted tunnel. Experiments conducted in a virtualized VMware Workstation environment showed that WireGuard provides higher data transfer speeds and lower latency, while OpenVPN demonstrates flexibility and compatibility with corporate systems. The combined use of both protocols improves system resilience and adaptability. The practical significance of the research lies in the possibility of implementing the proposed architecture in corporate and private networks to protect cloud applications, organize secure remote employee access, and enhance the security level of information resources.
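As a concrete illustration of forcing all client traffic through the encrypted tunnel, a minimal WireGuard client profile might look as follows. All keys, addresses, and the endpoint are placeholders, not the configuration from the study:

```ini
# Hypothetical WireGuard client profile (placeholders throughout).
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24
DNS = 10.0.0.1

[Peer]
PublicKey = <gateway-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0        # route ALL traffic through the tunnel
PersistentKeepalive = 25
```

`AllowedIPs = 0.0.0.0/0` is the setting that enforces mandatory transmission of all traffic through the tunnel, with the gateway's firewall rules dropping anything that arrives outside it.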
The paper deals with the development of a model for real-time recognition and classification of UAVs and birds based on training the YOLOv10 neural network. The research is relevant to the problem of UAV detection in security contexts, given the growing use of UAVs in various fields. A dataset of 6,255 images collected from proprietary archives and public resources was used to train the model. Data annotation, augmentation, and splitting were implemented using the Roboflow.com service. The model was trained on an NVIDIA GeForce RTX 4080 GPU using the Ultralytics framework. Test results showed high recognition accuracy, with mAP50 and mAP50-95 metrics exceeding previous versions of YOLO. The model demonstrates the ability for efficient object segmentation and tracking, which makes it promising for optoelectronic surveillance applications. The results of the study can be useful for developers of UAV and bird detection and classification systems, as well as for improving safety in various fields.
This study introduces a computational pipeline for the automated linguistic and structural analysis of legal texts, applied to the Code of Administrative Offenses of the Republic of Kazakhstan (CAO RK, K1400000235). The proposed workflow integrates data collection, text preprocessing, tokenization, keyword extraction, semantic clustering, and visualization using natural language processing (NLP) and statistical techniques implemented in Python. The pipeline unites lexical, thematic, and quantitative linguistic analyses into a coherent sequence that enables the identification of frequency distributions, semantic fields, and latent topics across the hierarchical structure of the Code (sections, chapters, and articles). The analysis of the CAO RK corpus revealed several distinctive linguistic patterns: a dominance of sanction- and responsibility-related vocabulary (штраф 'fine', ответственность 'liability', правонарушение 'offense'), high lexical density in chapters regulating economic and procedural offenses, and concentrated thematic clusters reflecting the normative-punitive orientation of administrative law. Visualization techniques such as frequency histograms, thematic heatmaps, and topic maps illustrate the potential of the pipeline for exploring legislative language quantitatively. Overall, the framework establishes a scalable foundation for comparative legal linguistics, automated legislative monitoring, and the modernization of legal analytics in Kazakhstan.
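The tokenization and keyword-frequency steps of such a pipeline can be sketched with the standard library alone. The sample sentence and the stopword list are invented stand-ins; the real input would be the scraped legal corpus after preprocessing:

```python
from collections import Counter
import re

STOPWORDS = {"в", "и", "на", "за"}          # tiny illustrative list

def keyword_counts(text):
    """Lowercase, tokenize on word characters, drop stopwords, count."""
    tokens = re.findall(r"\w+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

sample = "Штраф за правонарушение и ответственность за штраф"
print(keyword_counts(sample).most_common(2))
```

The resulting counts, aggregated per chapter or article, feed directly into the frequency histograms and heatmaps the abstract mentions.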
This article presents a comparative analysis of international benchmarking systems and their application to assessing urban livability and citizen engagement. The research examines key global indices – Economist Intelligence Unit (EIU), Mercer, PwC, TUWIEN Smart City Model, MIT Treepedia, and the National League of Cities (NLC) – to identify dominant trends and classification models of city benchmarking. Four major types of benchmarking practices were defined: multi-factor indices, single-indicator rankings, thematic analytical reviews, and diagnostic metrics. An empirical case study of Almaty, Kazakhstan, demonstrates the adaptation of international practices to the local context. Between 2020 and 2023, over 1,500 participatory projects were implemented under the Participatory Budget program, primarily in urban greening, infrastructure, and public safety. The findings show that digital governance platforms (Open Almaty, iKomek, Almaty Urban Center) enhance civic participation but remain limited by unequal digital access. The study concludes that benchmarking serves as an effective governance and evaluation tool for improving urban livability and inclusiveness. The Almaty case illustrates the potential of combining global best practices, data-driven governance, and participatory approaches to foster sustainable urban transformation.
In the era of accelerating climate change and growing urban populations, the frequency and severity of natural disasters have increased significantly, posing substantial threats to infrastructure, economic stability, and human lives. Disasters such as earthquakes, floods, and hurricanes often cause serious destruction of buildings, requiring rapid and accurate damage assessment to aid in emergency response and resource allocation. In light of this, the research aims to deliver a deep-learning-based building damage assessment model built on a hybrid architecture combining artificial intelligence and the Internet of Things (IoT). The paper examines the use of IoT and artificial intelligence in disaster management systems in order to improve automation, transparency, and sustainability in smart intelligence systems. The system collects and analyzes pre-disaster and post-disaster aerial imagery to classify buildings into damage categories, from no damage to destroyed. We also integrate the model into a wider disaster management system that visualizes damage on a geospatial interface, helping decision-makers quickly identify priority areas and streamline the disaster response. The system is intended to assist public authorities, NGOs, and first responders with rapid decision-making in the post-disaster response phase.
MATHEMATICAL SCIENCES
This paper presents a numerical approach for solving Fredholm integral equations of the first kind using the Bubnov–Galerkin method with Alpert wavelet bases. These equations are well-known for being ill-posed, meaning that small changes in input data can lead to large deviations in the solution. Therefore, robust and accurate numerical methods are essential. The proposed method utilizes orthonormal and compactly supported Alpert wavelets, which offer excellent localization properties and yield well-conditioned, sparse system matrices when projecting the integral operator. This enhances numerical stability and reduces computational complexity. A series of computational experiments was carried out using various refinement levels and polynomial degrees.
The accuracy of the method was evaluated by comparing approximate solutions to the exact analytical solution. The results demonstrate exceptionally small absolute errors, often approaching machine precision. Additionally, a comparative analysis with power polynomial bases confirms the superiority of the Alpert wavelet approach in terms of convergence and approximation quality. Overall, the method proves to be efficient, stable, and suitable for further extension to more complex integral equations, including multidimensional and noisy-data problems. This confirms the potential of Alpert wavelet-based Galerkin schemes as a reliable tool for the numerical treatment of inverse and ill-posed problems in applied sciences.
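In the Bubnov–Galerkin scheme described above, the approximate solution is sought as $u_N(t) = \sum_{j=1}^{N} c_j\,\psi_j(t)$ in the Alpert wavelet basis $\{\psi_j\}$, and the first-kind equation $\int_a^b K(x,t)\,u(t)\,dt = f(x)$ is projected onto the same basis. In generic notation (the standard form of the method, not the paper's specific test problems):

```latex
\sum_{j=1}^{N} c_j \int_a^b\!\!\int_a^b K(x,t)\,\psi_j(t)\,\psi_i(x)\,dt\,dx
  \;=\; \int_a^b f(x)\,\psi_i(x)\,dx, \qquad i = 1,\dots,N,
```

i.e. a linear system $Ac = b$ for the coefficients, whose matrix is sparse and well conditioned when the basis functions are orthonormal and compactly supported, as for Alpert wavelets.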
It is known that the class of close-to-convex functions is defined by the condition of positivity of the functional with a starlike function . Replacing the starlike function with a convex one leads to a known subclass of the class . In this article, we introduce a generalization of the class to the case when the set of values is contained in a region of a special type, which, in a particular case, can coincide with a half-plane. The generalization is also associated with the extension of the class to a certain subclass of the class of normalized doubly close-to-convex functions. The diversity of special cases of a domain of a special type and the transition to doubly close-to-convex functions allow us to obtain both new original results and generalizations of previously known results. The main part of this article is devoted to proving distortion theorems, finding the radii of convexity of the considered classes of functions, and justifying the sharpness of the obtained results. A connection has also been established between the introduced class of functions and a certain new class of doubly close-to-starlike functions, special cases of which have been actively studied in recent years. For this class and its subclasses, new results have also been obtained in the form of theorems on growth and the radius of starlikeness, generalizing previously known results.
We study a three-weight inequality for a superposition of the Copson, Hardy, and Tandori operators. The goal of this paper is to prove a complete characterization of the boundedness of the operator that is a combination of these three operators in weighted Lebesgue spaces from to . The main focus is on determining necessary and sufficient conditions under which this inequality holds for all non-negative measurable functions on the positive real axis. The notion of a fundamental function of a Borel measure with respect to an increasing function is used substantially. Since the Tandori operator is not a linear operator, we cannot use the duality methods used in earlier works. To solve this problem, we develop a new, simplified discretization method that avoids the complexities of previously known methods. An explicit form of the best constant in the inequality is obtained, demonstrating the accuracy and optimality of the results. By establishing necessary and sufficient conditions for the boundedness of these composite operators, we improve the inequalities previously established in the works of Gogatishvili A., Pick L., Opic B. [1]. The results obtained in the paper extend and complement existing research in the field of weighted inequalities and operator analysis in function spaces and offer potential applications in approximation theory, harmonic analysis and related areas.
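For reference, the three operators have the following standard forms (up to the weight and normalization conventions of the paper, which may differ):

```latex
(Hf)(x) = \int_0^x f(t)\,dt, \qquad
(Cf)(x) = \int_x^\infty f(t)\,dt, \qquad
(Tf)(x) = \operatorname*{ess\,sup}_{t \in [x,\infty)} |f(t)|,
```

the Hardy, Copson, and Tandori operators respectively. The last is manifestly nonlinear, which is exactly why the duality arguments used in earlier works fail and a discretization method is needed.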
In this article, we study the expansion of a structure by adding a new predicate that is not definable by any formula in the original language. To consider an externally definable expansion, we define the extension of a model in both the essential and the non-essential case. Such expansions can lead to significant changes in the properties of the resulting structure. We focus on the case of externally definable expansions, where the new relation is given by the intersection of a formula defined in an elementary extension with the original structure. The concept of a uniformly externally definable expansion was first introduced by Macpherson, Marker, and Steinhorn in the context of expansions by cuts in submodels of o-minimal structures over the real numbers. Subsequently, Baizhanov demonstrated that expanding a model of a weakly o-minimal theory by a family of convex sets preserves both weak o-minimality and uniform external definability. We establish conditions for external expansions under which the key properties of the original structure are preserved.
Linear approximation is the approximation of a function from a certain class by elements of a fixed finite-dimensional subspace of that same class. For instance, for one-dimensional periodic functions, such elements are trigonometric polynomials. In the multidimensional case of interest, for functions periodic in each variable, this subspace is the set of trigonometric polynomials with a spectrum from a step hyperbolic cross. However, the question of selecting the coefficients of these polynomials arises. This paper presents an apparatus for recovering functions from Sobolev spaces with a dominating mixed derivative from their values at given points, and establishes error estimates for the recovery. The method is based on constructing a recovery function in the form of a polynomial with a spectrum from a step hyperbolic cross, where the coefficients are calculated from the given points. The approximation error is of the order of the orthowidth, which is an optimal result for such polynomials. The proposed method is exact on polynomials with a spectrum from a step hyperbolic cross. Additionally, a functional that recovers the Fourier coefficients of functions from the indicated spaces is derived. Due to the explicit expression of the recovery function, the obtained formula can be used to solve applied problems.
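A sketch of the spectrum in question, in standard notation (the paper's exact normalization may differ): the step hyperbolic cross of order $n$ in dimension $d$ is the union of dyadic blocks whose levels sum to at most $n$,

```latex
Q_{n}=\bigcup_{\substack{s\in\mathbb{Z}_{+}^{d}\\ s_{1}+\dots+s_{d}\le n}}\rho(s),
\qquad
\rho(s)=\bigl\{k\in\mathbb{Z}^{d}:\ \lfloor 2^{s_{j}-1}\rfloor\le |k_{j}|<2^{s_{j}},\ j=1,\dots,d\bigr\},
```

and the recovery polynomials are those whose Fourier spectrum is contained in $Q_{n}$.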
This work addresses the averaging problem for the Chafee–Infante equation in a micro-heterogeneous medium. It analyzes the equation with rapidly oscillating terms and dissipation. The model introduces a small parameter and formulates an averaged problem based on its limits. The study proves that the trajectory attractors of the original equation converge to those of the averaged equation and establishes the corresponding theorem. In cases with unique solutions, it investigates the convergence of global attractors under additional conditions on the nonlinear terms. The work also analyzes singular problems with nontrivial boundary conditions in perforated domains and on cavity boundaries. Here, the averaged equation differs from the original, reflecting the model’s effective averaged characteristics and possibly containing an extra potential term. As the small parameter approaches zero, the study shows that the attractors converge in the Hausdorff distance. The main result is that the attractors of the original Chafee–Infante equation converge to those of the averaged (limit) equation as the small parameter approaches zero. This work establishes this convergence for the first time in the context of homogenization in a periodically perforated medium, highlighting the scientific novelty and relevance of the findings. This research advances averaging methods for differential operators and may apply to solutions of nonlinear differential equations.
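For context, the classical Chafee–Infante equation (written here in its textbook form; the paper's oscillating coefficients are not reproduced, so the $\varepsilon$-dependence below is only schematic) reads

```latex
\partial_{t} u - \Delta u + \lambda\,(u^{3}-u)=0 ,
```

while the micro-heterogeneous version carries rapidly oscillating terms, e.g. coefficients of the form $a(x/\varepsilon)$, whose limit as $\varepsilon \to 0$ defines the averaged problem.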
PHYSICAL SCIENCES
This study presents results of high-resolution spectroscopic observations of the star MWC 645, a representative of the poorly studied FS CMa-type objects. These stars are characterized by strong emission lines and significant infrared excess caused by circumstellar dust. For the first time, a cool secondary component has been detected in this system. Fundamental parameters were determined for both components: for the hot (B-type) star, an effective temperature of 18,000 ± 2000 K and luminosity log(L/L☉) = 3.9 ± 0.4; and for the cool (K-type) star, Teff = 4250 ± 250 K and log(L/L☉) = 3.1 ± 0.4. The system is located at a distance of 6.5 ± 0.9 kpc and shows clear signs of active interaction, including ongoing mass transfer indicated by the complex profiles of many emission lines. The results confirm the binary nature of MWC 645 and its classification as an FS CMa-type object. This work highlights the need for further observations to refine the system’s parameters and improve our understanding of the structure and origin of its circumstellar environment.
This study investigates the effects of ionic cores on non-isothermal plasmas using a novel ion-ion interaction potential that incorporates screening effects from both ion cores and exchange-correlation interactions. Our findings indicate that with increasing distance the effective potential approaches a Yukawa-like screening potential, while at shorter distances strong electron binding weakens the screening. The values of the cutoff radius and of the core-edge steepness significantly influence the behavior of the potential and the radial distribution functions (RDFs). Higher coupling parameters strengthen the electron-ion interactions, leading to deeper potential wells and more pronounced non-ideality corrections. Both stronger screening and a larger cutoff radius at a fixed coupling parameter reduce the absolute values of the non-ideality corrections, indicating fewer interactions in the system. As the coupling parameter increases, the non-ideality corrections grow, reflecting stronger coupling. The results show the importance of taking ion-core effects into account in dense non-isothermal plasma research.
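The large-distance limit mentioned above is the standard Yukawa (screened Coulomb) form; writing it out explicitly (with $\lambda_{s}$ a generic screening length, not necessarily the paper's notation):

```latex
\Phi_{ab}(r)\;\xrightarrow[\;r\to\infty\;]{}\;\frac{Q_{a}Q_{b}}{r}\,e^{-r/\lambda_{s}} ,
```

so the ion-core and exchange-correlation effects studied here manifest as short-range deviations from this asymptotic behavior.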
In this work, a method for synthesizing antimony selenide (Sb2Se3) thin films is presented, along with investigations of their morphological, structural, and optical properties. The synthesis method consisted of two stages. In the first stage, an antimony precursor film was deposited by magnetron sputtering. In the second stage, selenization was carried out in selenium vapor at a temperature of 400 °C for 10 minutes. The morphology of the obtained films was examined using scanning electron microscopy. The morphological analysis showed that the film has a polycrystalline structure with good adhesion to the silicon substrate. The elemental composition of the film was analyzed by energy-dispersive X-ray spectroscopy (EDS). According to the EDS results, the atomic percentage ratio of Se/Sb was 1.59, indicating that the obtained film is close to stoichiometric composition. The EDS data were confirmed by phase composition analysis performed using X-ray diffraction. It was found that the film crystallizes in the orthorhombic structure (Pnma). No secondary phases were detected in the structure. To investigate the optoelectronic properties of the film, a reflectance spectrum was recorded. From the reflectance spectrum, the band gap energy was determined using the Tauc method and found to be 1.69 eV, which is optimal for applications in optoelectronic devices.
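The Tauc extrapolation mentioned above can be sketched in a few lines. This is a generic illustration on synthetic data, not the authors' processing pipeline: for a direct allowed transition one plots (αhν)² against photon energy and extrapolates the linear region above the absorption edge down to zero.

```python
import numpy as np

def tauc_band_gap(hv, alpha, n=2, fit_window=(1.8, 2.2)):
    """Estimate an optical band gap by the Tauc method.

    (alpha * hv)**n is treated as a function of photon energy hv, and
    the linear region above the absorption edge is extrapolated to
    zero; n = 2 corresponds to a direct allowed transition.
    """
    y = (alpha * hv) ** n
    mask = (hv >= fit_window[0]) & (hv <= fit_window[1])
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope  # x-intercept of the linear fit

# Synthetic direct-gap absorption edge with a preset gap of 1.69 eV
Eg_true = 1.69
hv = np.linspace(1.2, 2.4, 200)  # photon energies, eV
alpha = np.where(hv > Eg_true,
                 1.0e4 * np.sqrt(np.clip(hv - Eg_true, 0.0, None)) / hv,
                 0.0)
```

On the synthetic edge the fit recovers the preset gap; for measured reflectance data the absorption coefficient and the fitting window must of course be derived from the actual spectrum.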
This scientific article investigates the structure and surface properties of Ti-Au-based thin coatings obtained by the PVD (magnetron sputtering) method in a vacuum environment at a high temperature of 450°C under direct current (DC) and radio frequency (RF) modes. The thin coatings deposited using the NanoPVD system had an average thickness of about 1 µm. Experimental studies showed that the chemical composition, microstructure, and properties of the coatings are closely dependent on the technological parameters of the magnetron sputtering process. The effect of Ag and Cu additions on the microstructure and mechanical properties of the Ti-Au coatings was studied using scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and X-ray diffraction (XRD). The surface roughness characteristics of the coatings were analyzed by atomic force microscopy (AFM), and nanoindentation and tribological tests were carried out to comprehensively evaluate the changes in their properties. It was shown that the modification of the Ti6Al4V substrate surface by the PVD method increases its wear resistance and reduces the coefficient of friction. X-ray phase analysis results revealed that the improvement in the tribological properties of the obtained thin coatings is directly related to the formation of the primary Ti3Au phase and the increase in its content.
This paper presents the results of a study of the optical properties of hydrogen, specifically the reflection and refraction coefficients of electromagnetic waves, with the permittivity of the substance described by the generalized Drude–Lorentz model. This paper examines the longitudinal and transverse spectra of microscopic ion current oscillations in hydrogen at various temperatures and densities, and analyzes the influence of electron exchange and correlation effects. The study was conducted using the effective interaction potential, taking into account a local field correction obtained from quantum Monte Carlo simulations. The use of accurate local field models, such as approximations based on quantum statistical calculations, allows for the reliable reproduction of the transport and optical properties of dense electron systems. In particular, taking into account the local field function leads to significant corrections in the calculations of optical and dynamic properties, which is critical for modeling hot dense matter, metallized plasma, and degenerate electron systems. In addition, the presence of an exact form of the local field function allows us to correctly describe the optical and dynamic properties of the plasma, including the reflection and absorption coefficients and natural oscillation modes.
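For reference, a common form of the generalized Drude permittivity with a dynamic collision frequency $\nu(\omega)$, and the normal-incidence reflectivity derived from it (textbook expressions, not necessarily the exact parametrization used in the paper):

```latex
\varepsilon(\omega)=1-\frac{\omega_{p}^{2}}{\omega\bigl(\omega+i\,\nu(\omega)\bigr)},
\qquad
R(\omega)=\left|\frac{\sqrt{\varepsilon(\omega)}-1}{\sqrt{\varepsilon(\omega)}+1}\right|^{2}.
```

The local field correction discussed above enters through the collision frequency and the dielectric response of the electron subsystem.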
Currently, the development of electrochemical energy storage systems plays a key role in meeting the growing demand for electric power. Metal-air batteries (MAB) with high specific energy capacity are considered as promising solutions for use in power plants as backup power sources. One of the main limitations of their widespread implementation is the need to improve the efficiency of anode materials. In this paper, we propose the use of porous aluminum electrodes to improve the performance of MAB. Two types of porous anodes manufactured using different technologies were studied. For the powder aluminum anode, the current density was 20–30 mA/cm², which is comparable to the performance of a monolithic (standard) anode. At the same time, foam aluminum demonstrated higher current density values of 52–64 mA/cm². An additional advantage of porous anodes is their reduced weight (by 10–30%), which helps to improve the weight and size characteristics of MAB and opens up opportunities for creating more efficient energy systems.
OIL AND GAS ENGINEERING, GEOLOGY
The relevance of this study is due to the need for a detailed study of recoverable oil reserves in reservoirs of productive horizons, which is associated with the conditions of sedimentation and deposit formation within the South Turgai sedimentary basin. A comprehensive analysis, including granulometric studies of terrigenous sediments, is required to accurately determine the conditions of rock formation. This approach makes it possible to obtain reliable data on the depositional environment and to perform a more detailed facies analysis. The aim of the study is to determine the sedimentation facies of the South Kumkol productive horizon Y-II using granulometric analysis. The methodology includes the application of several methods of granulometric analysis. The generalised Fuchtbauer and Muller definition of depositional setting was used as the main approach. Genetic interpretation of the sediments is based on the K. Björlikke diagram, which analyses the relationship between the sorting and the skewness (asymmetry) of the particle-size distribution. The G.F. Rozhkov dynamogenetic diagram, which relates skewness (asymmetry) and kurtosis (excess), was used to clarify the sedimentation conditions. The results of the study make it possible to characterise more objectively the conditions of reservoir rock formation and to specify the sedimentation parameters. An integrated approach to the analysis of granulometric data helps to improve the accuracy of facies diagnostics of oil-bearing horizons.
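The moment statistics that such dynamogenetic diagrams are built on (mean size, sorting, skewness, kurtosis in φ units) can be computed directly from class weights. A minimal method-of-moments sketch, not tied to the paper's dataset:

```python
import numpy as np

def moment_statistics(phi, weights):
    """Method-of-moments grain-size statistics.

    phi     : size classes in phi units (negative log2 of diameter in mm)
    weights : weight of sediment in each class (need not be normalized)
    Returns (mean, sorting, skewness, kurtosis).
    """
    phi = np.asarray(phi, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize to fractions
    mean = np.sum(w * phi)
    dev = phi - mean
    sorting = np.sqrt(np.sum(w * dev ** 2))          # standard deviation = sorting
    skewness = np.sum(w * dev ** 3) / sorting ** 3   # asymmetry
    kurtosis = np.sum(w * dev ** 4) / sorting ** 4   # excess (kurtosis)
    return mean, sorting, skewness, kurtosis
```

Plotting skewness against sorting (Björlikke-type diagram) or skewness against kurtosis (Rozhkov-type diagram) for each sample then places it in a depositional field.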
Polymer flooding is one of the key technologies for enhancing oil recovery. Partially Hydrolyzed Polyacrylamide (HPAM) is widely used due to its excellent viscosity-increasing properties. However, the adsorption and retention behavior of HPAM in reservoir porous media presents a dual effect: on one hand, it improves sweep efficiency by increasing flow resistance; on the other hand, it leads to a loss in effective polymer concentration and viscosity, reducing displacement efficiency and increasing costs. Therefore, a systematic understanding and control of HPAM adsorption behavior are crucial for improving the effectiveness of polymer flooding. This work systematically reviews seven main measurement methods for HPAM adsorption quantity, comparing their applicable conditions and limitations. It summarizes the key factors influencing HPAM adsorption and retention behavior from three aspects: polymer properties, rock mineral characteristics, and reservoir environmental conditions. Furthermore, it outlines chemical anti-adsorption methods, represented by competitive adsorption and nanofilm protection, along with their mechanisms. Finally, future research directions are proposed, focusing on building adsorption prediction models, deepening the understanding of adsorption mechanisms under multi-field coupling conditions, and developing novel functional polymers with anti-adsorption capabilities.
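One of the simplest measurement approaches reviewed in such work is the static depletion (mass-balance) method: the adsorbed amount is inferred from the drop in bulk polymer concentration after contact with crushed rock. A minimal sketch (function name and units are illustrative, not taken from the paper):

```python
def depletion_adsorption(c0, ce, volume, rock_mass):
    """Static depletion (mass-balance) estimate of adsorbed polymer.

    c0, ce    : initial and equilibrium polymer concentration, mg/L
    volume    : solution volume, L
    rock_mass : mass of crushed rock, g
    Returns the adsorbed amount in mg per gram of rock.
    """
    return (c0 - ce) * volume / rock_mass
```

Dynamic (core-flood) retention measurements differ in that they also capture mechanical entrapment and hydrodynamic retention, which is one reason the review compares the applicability of each method.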
The article presents the results of supplementary geological study of the Lisakovsk area (sheets N-41-XXVII and N-41-XXXIII), which made it possible to significantly clarify the geological structure and ore potential of this important metallogenic territory. Based on the analysis and reinterpretation of a large volume of stock and archival materials, as well as the results of our own field studies, the stratigraphic scheme of the area has been refined. For the first time in the Denisov structural-formational zone, a complex of metamorphic rocks of the Precambrian-Lower Paleozoic has been identified. The stratigraphic position of the Lisakovsk stratum of oolitic iron ores has been clarified. A digital model of the cover complex has been created, allowing for the compilation of horizon maps for all stratigraphic units. New data on the tectonic structure of the junction zone of the Denisov and Valeryanovsk structural-formational zones have been obtained. A new geodynamic model for the formation of this structure has been proposed. Minerogenic zoning of the territory was carried out with the identification of promising areas for various types of minerals. Minerogenic zones, districts, ore nodes and ore fields prospective for iron, copper, polymetals, gold, silver, and bauxite have been identified. Areas are recommended for prospecting for gold-copper-porphyry, pyrite-polymetallic, and gold-quartz-vein types of mineralization. The Northern and Alakolsky-1 areas are designated as priority sites. The obtained results provide a reliable basis for further study of the territory and targeted exploration work in the identified promising areas.
ECONOMY AND BUSINESS
The study examines the modernization processes of production capacities at JSC “NAC Kazatomprom” through a comprehensive strategic analysis based on SWOT, PESTLE, and BCG-matrix assessment methods. As the world’s largest uranium producer, the company faces the dual challenge of maintaining its cost leadership while adapting to global market volatility, technological change, and growing ESG requirements. The research identifies key drivers of modernization, including increased extraction efficiency, capital investment growth, digitalization, and technological innovation, as well as constraints related to rising production costs and external geopolitical risks. The analysis demonstrates that modernization is essential for sustaining Kazakhstan’s competitive position in the global nuclear fuel market and ensuring long-term operational stability. Practical implications include recommendations for optimizing investment programs, enhancing technological upgrading, and strengthening strategic resilience in the context of the global energy transition.
The integration of sustainability into construction project control systems has become an urgent need due to growing environmental and resource pressures on infrastructure development worldwide. While traditional Earned Value Management (EVM) is widely used to track cost and schedule performance, it does not incorporate the environmental and resource-efficiency indicators that increasingly affect project outcomes. This paper proposes an extended EVM framework that incorporates three sustainability measures, Carbon Emissions (CE), Energy Consumption (EC), and Material Waste (MW), into project performance evaluation. A Multiple Linear Regression (MLR) model was fitted to nine months of actual operational data from the Malir Expressway project in Karachi, Pakistan, an ongoing large-scale public infrastructure project. The findings indicate that the sustainability variables play a significant role in determining the Cost Performance Index (CPI) and the Schedule Performance Index (SPI). The proposed model enhances decision-making by allowing project stakeholders to monitor financial and environmental aspects simultaneously. The methodology is directly applicable to public infrastructure development and provides a starting point for the future development of Sustainable Earned Value Management frameworks.
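The indices named above are simple ratios, and the MLR step can be sketched with ordinary least squares. Everything below is synthetic and illustrative; the coefficients are invented, not the Malir Expressway estimates:

```python
import numpy as np

def performance_indices(ev, ac, pv):
    """Classical EVM indices.

    ev: earned value, ac: actual cost, pv: planned value.
    Returns (CPI, SPI); values above 1 mean under budget / ahead of schedule.
    """
    return ev / ac, ev / pv

# Illustrative only: regress CPI on the three sustainability variables
# named in the abstract (CE, EC, MW) using synthetic monthly data.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(9, 3))          # 9 monthly observations of CE, EC, MW
beta_true = np.array([-0.2, -0.1, -0.3])        # hypothetical coefficients
cpi_obs = 1.0 + X @ beta_true                   # noise-free synthetic CPI
A = np.column_stack([np.ones(len(X)), X])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, cpi_obs, rcond=None)  # MLR fit: [intercept, CE, EC, MW]
```

With real monthly EVM records, `cpi_obs` would come from `performance_indices` and the fitted coefficients would quantify how strongly each sustainability variable drags on cost performance.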
ISSN 2959-8109 (Online)