Proceedings of the 3rd International Conference on Computing Innovation and Applied Physics
Yazeed Ghadi, Al Ain University
This work investigates the vertical and horizontal characteristic velocities of salt-finger convection over different density ratios by experimentally adding salty, hot water at various concentrations into cold, fresh water in a water tank. Salt fingers are visualized by dye, and their displacement over time is obtained by recording and analysing video. These experiments effectively generate phenomena resembling the salt-finger micro-structure of the ocean. The vertical velocity increases significantly as the density ratio is lowered, corresponding to a larger salinity difference between the top and bottom and hence a stronger destabilizing effect. Tilted fingers resembling previous experimental and oceanographic observations are also seen. We also observe a non-zero horizontal velocity, implying the presence of staircases. Finger widths obtained from the experiments are compared with those predicted by linear stability analysis and agree to within an order of magnitude.
In the era of big data, survival analysis, a statistical method for analyzing the expected duration of time until one or more events happen, has gained significant importance, especially in medical and biological research. This paper primarily focuses on the comprehensive exploration and understanding of survival analysis modelling, from traditional to modern approaches, and identifies the existing challenges and future prospects of these models. We commence by discussing foundational models such as the Kaplan-Meier and Cox proportional hazards models, and then transition into the exploration of the more flexible Accelerated Failure Time model. Acknowledging the current challenges faced in survival analysis, such as dealing with high-dimensional data, lack of labelled data, and data quality and reliability, we further delve into the potential solutions provided by modern techniques like deep learning, transfer learning, and semi-supervised learning. Additionally, the paper highlights the issues of interpretability and transparency of complex models, offering an overview of interpretability methods such as LIME and SHAP. Despite certain limitations, our study offers a valuable reference for understanding the evolution of survival analysis and sparks further discussions about its future development, emphasizing the profound significance of survival analysis in the realm of statistical research.
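The Kaplan-Meier model discussed above can be sketched in a few lines; the following is a minimal illustration of the estimator, with an invented toy dataset (the durations and censoring flags are not from any study cited here):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.
    times: observed durations; events: 1 = event occurred, 0 = right-censored.
    Returns (t, S(t)) pairs at each distinct event time."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    curve, s = [], 1.0
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)                        # n_i
        died = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)  # d_i
        s *= 1.0 - died / at_risk    # S(t) = product over t_i <= t of (1 - d_i/n_i)
        curve.append((t, s))
    return curve

# toy data: eight durations, two of them censored (events == 0)
times  = [2, 3, 3, 5, 8, 8, 9, 10]
events = [1, 1, 0, 1, 1, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Censored subjects count toward the at-risk set until their censoring time but never as deaths, which is exactly how the estimator handles incomplete follow-up.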
Structured beams have been extensively studied in the last ten to twenty years. Due to their excellent spatial characteristics, they have been widely used in optical communication, optical tweezers, and particle manipulation. This paper first analyzes and summarizes the formation mechanism of structured beams. Then, based on eigenmode superposition theory, numerical simulations are carried out for the first three orders of Hermite-Gaussian (HG) eigenmodes. At the same time, some complex structured beams are obtained experimentally. The structured beams obtained in experiments agree well with the numerical simulation results, which further verifies that eigenmode superposition is an effective way to realize complex structured beams.
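The eigenmode-superposition idea can be illustrated with a one-dimensional sketch: each HG mode is a Hermite polynomial times a Gaussian envelope, and a structured field is a weighted sum of modes. The waist, grid, and equal weights below are illustrative choices, not the paper's parameters:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the recurrence
    H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hg_amplitude(n, x, w=1.0):
    """1-D Hermite-Gaussian mode profile (unnormalised), waist w."""
    xi = math.sqrt(2.0) * x / w
    return hermite(n, xi) * math.exp(-x * x / w ** 2)

def superposition(coeffs, x):
    """Coherent superposition of the first len(coeffs) HG orders."""
    return sum(c * hg_amplitude(n, x) for n, c in enumerate(coeffs))

# equal-weight superposition of the first three orders, sampled on a grid
xs = [i * 0.05 for i in range(-60, 61)]
field = [superposition([1.0, 1.0, 1.0], x) for x in xs]
intensity = [abs(u) ** 2 for u in field]
```

Replacing the equal weights with complex coefficients changes the relative phases of the modes, which is what produces the more complex structured patterns.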
A mapping satisfying two specific axioms provides the common notion of a group action. Equivalently, a group action can be described by a homomorphism from a group to the symmetric group of a set. Any example of a group action can therefore be stated in terms of this second, equivalent definition, such as the regular action, the natural matrix action, the coset action, and Z^2 acting on R^2. To reveal the orbit-stabilizer theorem, it is necessary to examine the concepts of the orbit and the stabilizer of a group action. After this preparatory work, the orbit-stabilizer theorem can be proved by defining a mapping from the orbit to the set of cosets of the stabilizer and checking that the mapping is well-defined and bijective. To derive Burnside's lemma, one needs to introduce the set of fixed points, which is related to the concept of the stabilizer. Through the orbit-stabilizer theorem, together with the fact that a set is a disjoint union of orbits, Burnside's lemma can be confirmed. Moreover, it is natural to compose a group action with a linear representation, obtaining a representation called the permutation representation. One can then calculate the character of the permutation representation and the dimension of the fixed subspace (CX)^G, which proves Burnside's lemma in another way via the permutation representation.
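Burnside's lemma as stated above can be verified computationally on a small example: counting two-coloured necklaces under rotation by averaging fixed points over the cyclic group. This sketch is not from the paper; it is a standard illustration:

```python
from itertools import product

def rotations(n):
    """The cyclic group C_n acting on positions 0..n-1 by rotation."""
    return [lambda t, k=k: t[k:] + t[:k] for k in range(n)]

def burnside_count(n, colors):
    """Number of orbits of colourings under rotation, by Burnside's lemma:
    |X/G| = (1/|G|) * sum over g in G of |Fix(g)|."""
    group = rotations(n)
    fixed = 0
    for g in group:
        fixed += sum(1 for t in product(range(colors), repeat=n) if g(t) == t)
    return fixed // len(group)

# 2-coloured necklaces with 4 beads, up to rotation:
# (2^4 + 2 + 2^2 + 2) / 4 = 6 orbits
print(burnside_count(4, 2))   # 6
```

The brute-force fixed-point count here plays the role of the character calculation in the representation-theoretic proof: both compute the same average.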
The SIR model was used to better comprehend and analyse the transmission dynamics of COVID-19. This mathematical framework splits the population into three compartments: susceptible, infectious, and recovered, allowing disease spread to be simulated over time. After stating the essential assumptions of the SIR model, the project illustrates the numbers of susceptible, infected, and recovered individuals over time by constructing several differential equations with specific parameters. The SIR model also gives insights into expected disease trajectories, the impact of therapies, and other pertinent findings by incorporating critical factors and assumptions. Researchers successfully anticipate disease trajectories using this simulation, indicating the usefulness of such measures in preventing viral propagation. Researchers have found that the incubation period of COVID-19 has a vital impact on the epidemic curve, resulting in slower growth in the number of infected people over time and a delay in the upward slope of the infectious curve. The SIR model's examination of epidemic curves has helped identify the peak of infections, estimate the duration of outbreaks, and assess the efficiency of public health measures in various contexts. Further study, continued data collection, and integration with real-world data will improve the accuracy and usefulness of the SIR model, enabling evidence-based approaches to combating COVID-19's issues.
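The compartmental equations described above can be sketched with a simple forward-Euler integration. The parameter values (beta, gamma, initial fractions) are illustrative, not the paper's fitted values:

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the SIR system
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I,
    with S, I, R expressed as population fractions summing to 1."""
    s, i, r = s0, i0, r0
    traj = [(0.0, s, i, r)]
    for k in range(1, int(days / dt) + 1):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        traj.append((k * dt, s, i, r))
    return traj

# basic reproduction number R0 = beta/gamma = 2.5: the epidemic peaks and recedes
traj = simulate_sir(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, r0=0.0, days=120)
peak_t, _, peak_i, _ = max(traj, key=lambda row: row[2])
```

Because dS + dI + dR = 0 at every step, the scheme conserves the total population; the peak of the infectious curve occurs when S falls to gamma/beta.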
The Chinese remainder theorem (denoted "the theorem" in this article) was originally an important theorem in number theory. It played a vital role in the integer solution of congruence equations in ancient times. With the continuous development of algebraic systems, the theorem naturally takes different forms. This paper presents some research and applications based on the theorem: the theorem in polynomial form, the theorem in the form of group theory, the theorem on unitary rings, the theorem on polynomial ring modules, etc. Since integers and polynomials are special rings, the first two forms are special cases of the theorem on unitary rings; in fact, the form in group theory is also covered. This paper elaborates the first three forms of the theorem and gives their specific applications.
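In its classical integer form, the theorem can be realised constructively by merging congruences one at a time with the extended Euclidean algorithm. The sketch below uses Sunzi's traditional example as a check:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli; returns the
    unique solution modulo the product of the moduli."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        g, p, _ = extended_gcd(m, mi)       # p = inverse of m modulo mi
        assert g == 1, "moduli must be pairwise coprime"
        # shift x by a multiple of m so the new congruence is satisfied
        x = (x + (r - x) * p % mi * m) % (m * mi)
        m *= mi
    return x

# Sunzi's classic: x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)
print(crt([2, 3, 2], [3, 5, 7]))   # 23
```

The same merging step, read abstractly, is the isomorphism Z/mn = Z/m x Z/n that the ring-theoretic forms of the theorem generalise.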
Concrete strength prediction is a complex nonlinear regression task that involves multiple ingredients and age as key factors. In order to achieve accurate predictions, the Markov Chain Monte Carlo (MCMC) and Gaussian Process Regression (GPR) techniques are employed. The dataset, sourced from Kaggle repositories, comprises a comprehensive collection of 1030 data points. Alongside the existing features (content of ingredients, age and strength), we introduce new ones, including water-cement ratio, sand ratio, and water-binder ratio, to enhance the model's credibility. To determine the optimal kernel function, the dataset is partitioned into training and testing subsets. Notably, the MCMC method yields an R^2 of 0.41, while GPR demonstrates a significantly improved R^2 of 0.89. Further investigation is warranted to refine the model's fit and optimize its predictive capacity.
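A minimal GPR posterior-mean sketch, assuming an RBF (squared-exponential) kernel; the one-dimensional toy data below stand in for the concrete features and are not from the Kaggle dataset:

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-(x1 - x2) ** 2 / (2.0 * length ** 2))

def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gpr_predict(xs, ys, x_star, noise=1e-6, length=1.0):
    """GP posterior mean at x_star: k_*^T (K + noise*I)^{-1} y."""
    k = [[rbf(a, b, length) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(k, ys)
    return sum(rbf(x_star, xi, length) * ai for xi, ai in zip(xs, alpha))

# toy 1-D regression: the posterior mean interpolates near-noiseless samples
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]
print(gpr_predict(xs, ys, 1.0))
```

Choosing the kernel (and its length scale) is exactly the model-selection step the abstract performs with the train/test split.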
Frequent global earthquakes lead to catastrophic property damage and casualties due to building collapses. However, the 1976 Tangshan Earthquake showcased an exception: the Forbidden City, constructed from wood and featuring mortise-tenon structures, remained unscathed amid the surrounding destruction. This study investigates the earthquake-resilient attributes of wooden mortise-tenon joints using economical high-school equipment. An innovative low-cost sensor system, featuring a custom instrumented hammer, is developed and validated. The hammer's impact force is calibrated against acceleration data from a standardized scale weight during impact. The system's reliability is tested by comparing resonance frequencies from finite-element modal analysis with experimental data for a cantilever beam. Impact-hammer tests assess frequency response and damping across model buildings with various joint configurations. Mortise-tenon joints display augmented frictional damping due to internal displacement. Simulated vibration acceleration responses yield a crucial finding: integrating mortise-tenon joints translates to an 11.0% reduction in earthquake vibrations. This research underscores the potential of accessible high-school devices in advancing seismic engineering insights.
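One standard way to extract the damping the abstract measures is the logarithmic decrement of successive free-decay peaks; this is a textbook method sketched here with synthetic peaks, not the study's actual data:

```python
import math

def damping_ratio(peaks):
    """Estimate the damping ratio from successive free-decay peak amplitudes
    using the logarithmic decrement:
    delta = ln(x_k / x_{k+1}),  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    decs = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decs) / len(decs)
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

# synthetic decay for zeta = 0.05: each peak shrinks by a fixed factor
zeta = 0.05
factor = math.exp(-2.0 * math.pi * zeta / math.sqrt(1.0 - zeta ** 2))
peaks = [factor ** k for k in range(5)]
print(damping_ratio(peaks))   # recovers 0.05
```

Higher frictional damping in the mortise-tenon joints would show up directly as a larger decrement between peaks in the hammer-test ring-down.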
With the development of science and technology, more and more civil and military aircraft have adopted swept-back wings. This paper explores the relationship between the sweep angle of the wing and the lift, drag, and lift-drag ratio of the aircraft, and discusses it in combination with actual swept-wing aircraft, such as the Boeing 737 and the MiG-23, to find the best sweep angle within an appropriate range and to analyze the effect of the sweep angle on flight speed. The study analyzes data through literature retrieval and data processing, and cites data models from many academic journals. The experimental data are mainly derived from computational fluid dynamics and wind-tunnel simulation. This paper finds that a larger sweep angle yields better aerodynamic performance suited for supersonic flight, while a smaller sweep angle yields a better lift-drag ratio, making it suitable for takeoff, landing, and low-speed flight.
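The basic reason sweep helps at high speed is simple-sweep theory: only the velocity component normal to the leading edge drives compressibility effects. A short sketch (the Mach numbers below are illustrative, not taken from the paper):

```python
import math

def effective_mach(mach, sweep_deg):
    """Simple-sweep theory: the wing 'sees' only the normal component,
    M_eff = M * cos(sweep)."""
    return mach * math.cos(math.radians(sweep_deg))

def sweep_for_critical(mach, critical_mach):
    """Sweep angle needed to keep the normal Mach number at M_crit."""
    return math.degrees(math.acos(min(1.0, critical_mach / mach)))

# a wing swept 35 degrees at Mach 0.85 sees roughly Mach 0.70
print(effective_mach(0.85, 35.0))
print(sweep_for_critical(0.85, 0.70))
```

This is why larger sweep delays drag rise for fast flight while an unswept wing, with its full dynamic pressure normal to the leading edge, gives the better low-speed lift-drag ratio.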
In the past decades, air travel has become one of the fastest travel methods, whether between two distant cities or between countries across the globe. Jet engines dominate the aircraft engine market; since they account for over 20% of the cost of building a plane, economic efficiency is the first consideration. In addition, due to the pollution from fuel burning, environmental friendliness has become another factor that must be valued. In this paper, the history and basic principles of the jet engine are demonstrated. Some recent improvements in jet engines are presented, such as the Diverterless Supersonic Inlet (DSI) and solidification crystallization control technology, which improves component strength during the build process. Different materials that improve both service life and component strength are also discussed, along with forward-looking perspectives on jet engine advancement.
Stirling engines are a kind of heat engine that operates by compressing and expanding a working fluid at two different temperatures. The Stirling cycle consists of four processes: two isothermal and two isochoric. Stirling engines may be assembled in one of three distinct configurations: alpha, beta, or gamma. They are superior to traditional heat engines in efficiency, noise, and pollution. Stirling engines may be found in various power-generation systems and in systems used for mechanical propulsion, heating and cooling, and similar applications, where they are highly useful. On the other hand, they have drawbacks, such as low power density, high cost, slow start-up and response times, and restricted availability. This paper covers a variety of topics pertaining to Stirling engines, including their history, composition, characteristics, applications, and use, as well as likely future advancements in the field.
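The cycle described above has well-known ideal-case formulas: with a perfect regenerator the efficiency reaches the Carnot limit, and the net work per cycle comes from the two isothermal legs. A short sketch with illustrative numbers:

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def stirling_efficiency(t_hot, t_cold):
    """Ideal Stirling cycle with a perfect regenerator reaches the Carnot
    limit: eta = 1 - T_cold / T_hot (temperatures in kelvin)."""
    return 1.0 - t_cold / t_hot

def stirling_work(n_mol, t_hot, t_cold, v_min, v_max):
    """Net work per cycle: isothermal expansion at T_hot minus isothermal
    compression at T_cold, W = n R (T_hot - T_cold) ln(V_max / V_min).
    The isochoric legs exchange no work."""
    return n_mol * R * (t_hot - t_cold) * math.log(v_max / v_min)

# 1 mol of working gas between 900 K and 300 K with a 2:1 volume ratio
print(stirling_efficiency(900.0, 300.0))          # ~0.667
print(stirling_work(1.0, 900.0, 300.0, 1.0, 2.0))  # J per cycle
```

The low power density the abstract mentions is visible here: raising work per cycle requires a larger volume ratio or temperature span, both of which are mechanically costly.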
Electric vehicles, a new technological advancement in the automotive industry for addressing energy and environmental issues, have become a hot topic in both domestic and international car development. The drive motor plays a crucial role as a key component of electric vehicles. The Alternating Current (AC) motor used in electric vehicles is highly efficient and widely applied, so studying AC motors holds significant theoretical and practical value. This article first explains the car model and the principle of the electric motor, followed by an analysis of the Proportional-Integral-Derivative (PID) principle. Finally, through PID control, the electric vehicle's speed is simulated to achieve better stability and economy. The experiments demonstrate that by improving the phase margin and stability of the lag compensator and feedback-loop system, the driving comfort and practicality of electric vehicles can be effectively enhanced.
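A discrete PID speed loop of the kind analysed above can be sketched against a first-order vehicle model. The mass, drag coefficient, and gains below are illustrative assumptions, not the paper's identified model or tuned controller:

```python
def simulate_pid_speed(kp, ki, kd, target, steps=600, dt=0.05):
    """Discrete PID controlling a first-order vehicle model
    m * dv/dt = F - c * v, with the PID output as traction force F."""
    m, c = 1200.0, 60.0            # assumed mass (kg) and linear drag (N s/m)
    v, integral, prev_err = 0.0, 0.0, target
    history = []
    for _ in range(steps):
        err = target - v
        integral += err * dt
        deriv = (err - prev_err) / dt
        force = kp * err + ki * integral + kd * deriv
        v += (force - c * v) / m * dt   # forward-Euler plant update
        prev_err = err
        history.append(v)
    return history

# drive the model to 20 m/s; gains are illustrative, not tuned for a real EV
hist = simulate_pid_speed(kp=800.0, ki=120.0, kd=50.0, target=20.0)
print(round(hist[-1], 2))
```

The integral term is what removes the steady-state error against drag; the derivative term adds damping, which corresponds to the phase-margin improvement the abstract discusses.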
Inclined photogrammetry is an emerging surveying and mapping technology that offers true-3D, multi-view data, but the huge volume of inclined photogrammetry data limits its application scenarios and drives up the cost of acquiring and using it. To address this problem, lightweighting technology for inclined photogrammetric data is studied. First, the characteristics of inclined photogrammetric data are analysed and the key lightweighting technologies are studied; a compression algorithm based on an improved triangular mesh model and a compression algorithm based on a regional chunking model are proposed, which solve the problems of large data volume and inconvenient use. Experimental results in small-scale mapping show that lightweighting of inclined photogrammetric data can be effectively achieved with each algorithm, and that an optimised combination of the two is particularly effective.
Though a vehicle based on bang-bang control can achieve line-following, it cannot turn smoothly and shakes severely when driving. Based on this, this article presents a vehicle that relies on PID control, and provides detailed information on building an Arduino-based line-following vehicle with RT control. The vehicle tracks a black line on a white background, with line-following achieved mainly by the PID control. The parameters of the materials for building the vehicle, as well as the logic of the PID control, are introduced in the following parts of the article. After adjusting Kp, Ki, and Kd several times, the vehicle achieves the basic function of line-following. This paper introduces the whole experiment and analyzes the results.
The United Nations’ Sustainable Development Goals include reducing child mortality, a crucial indicator of human progress. The UN hopes that all countries will eradicate preventable newborn deaths by 2030. Cardiotocograms (CTG) can be used to identify at-risk pregnancies. The aim of this article is to apply machine learning techniques to CTG data to help ensure fetal well-being. CTG data comprising 2126 samples and 22 variables, obtained from CTG exams, were sourced from Kaggle. Two different classification models were trained on the data to predict ‘Normal’, ‘Suspect’, and ‘Pathological’ fetal states, with sensitivity, precision, and F1 score reported per class and overall accuracy per model. According to obstetricians’ interpretation of the CTG, the ‘Normal’ state accounted for 57% of samples, the ‘Suspect’ state for 23%, and the ‘Pathological’ state for 20%. The classification models, built with Logistic Regression and Random Forest, predict the suspect and pathological states of the fetus from the CTG trace with high precision of 86% and 94%, respectively. However, the model developed with Random Forest had higher prediction accuracy for a negative fetal outcome. Healthcare workers without professional training in low-income countries could use this model to prioritize pregnant women in hard-to-reach regions, ensuring timely referrals and appropriate follow-up care.
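The logistic-regression half of the pipeline can be sketched from scratch with plain gradient descent. The two synthetic features below merely stand in for CTG variables (e.g. baseline heart rate, decelerations); they are not the Kaggle data:

```python
import math
import random

def train_logistic(xs, ys, lr=0.5, epochs=1000):
    """Binary logistic regression fitted by full-batch gradient descent."""
    dim, n = len(xs[0]), len(xs)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * dim, 0.0
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            for i in range(dim):
                gw[i] += (p - y) * x[i]          # gradient of log-loss
            gb += p - y
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# synthetic stand-in: class 1 ('suspect/pathological') has shifted features
random.seed(0)
xs = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)] + \
     [[random.gauss(2, 1), random.gauss(2, 1)] for _ in range(100)]
ys = [0] * 100 + [1] * 100
w, b = train_logistic(xs, ys)
acc = sum(predict(w, b, x) == y for x, y in zip(xs, ys)) / len(xs)
```

On real CTG data one would also report per-class precision and recall, since with a 57/23/20 class split overall accuracy alone can hide poor pathological-state detection.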
Traditional approaches to using data structures mainly focus on improving algorithm efficiency but rarely take advantage of object-oriented programming, which can model the definitions of mathematical concepts in a way similar to human thinking; hence traditional approaches cannot satisfy the need to simulate sets and their features and operations for mathematics studies. In this paper, the author points out the disadvantages of traditional approaches, proposes a series of hypotheses describing the relationships between sets and classes, such as inheritance and inclusion, together with a method based on those hypotheses that uses features of object-oriented programming, and uses C#, a mature object-oriented programming language, to simulate sets and their features. In addition, for each hypothesis, the author gives examples with C# code realizing the theory, clearly showing the proposed approach and why it is both efficient and elegant.
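The paper's examples are in C#; the general idea of mapping set concepts onto a class can be sketched in any object-oriented language. Here is a Python analogue (the class and method names are this sketch's own, not the paper's):

```python
class MathSet:
    """A small class modelling a mathematical set and its basic operations,
    illustrating the set-as-class idea (the paper's version is in C#)."""

    def __init__(self, elements=()):
        self._items = frozenset(elements)

    def contains(self, x):             # membership: x in A
        return x in self._items

    def is_subset_of(self, other):     # inclusion: A is a subset of B
        return self._items <= other._items

    def union(self, other):            # A union B
        return MathSet(self._items | other._items)

    def intersection(self, other):     # A intersect B
        return MathSet(self._items & other._items)

    def __eq__(self, other):           # extensionality: equal iff same elements
        return isinstance(other, MathSet) and self._items == other._items

    def __hash__(self):
        return hash(self._items)

a, b = MathSet({1, 2, 3}), MathSet({2, 3, 4})
print(a.intersection(b) == MathSet({2, 3}))   # True
```

Subclassing `MathSet` (e.g. a class of even integers) then mirrors the paper's inheritance-as-inclusion hypothesis: every instance of the subclass is an element of the parent set.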
Diffraction is an optical phenomenon commonly investigated for its applications in many optical systems, such as diffractive optical elements, microscopy, and coronagraphs. Current models for predicting diffraction typically sacrifice either efficiency or accuracy. This paper addresses both issues by implementing techniques inspired by the Braunbek method and the Bluestein method. A modification to Kirchhoff's boundary conditions is used to improve the theoretical model, and the chirp-z transform is applied instead of the fast Fourier transform for more flexible calculations. A comparison of diffraction patterns across models shows that the new method excels in accuracy, and a timing comparison of the numerical methods demonstrates that the chirp-z transform is faster than the fast Fourier transform by about a minute. The method introduced has many implications, such as the enhancement of dynamic optical systems and improved flexibility in other applications of the numerical Fourier transform.
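The flexibility of the chirp-z transform can be illustrated with a direct O(NM) evaluation; a production code would use the Bluestein FFT factorisation, but the definition is the same. The signal and zoom parameters below are illustrative:

```python
import cmath

def czt(x, m, w, a):
    """Direct chirp-z transform: X_k = sum_n x_n * a^(-n) * w^(n*k),
    for k = 0..m-1. With a = 1 and w = exp(-2j*pi/N) it reduces to the DFT."""
    return [sum(xn * a ** (-n) * w ** (n * k) for n, xn in enumerate(x))
            for k in range(m)]

def dft(x):
    """Reference discrete Fourier transform for comparison."""
    n = len(x)
    return [sum(xv * cmath.exp(-2j * cmath.pi * k * j / n)
                for j, xv in enumerate(x)) for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
w = cmath.exp(-2j * cmath.pi / len(x))
assert all(abs(p - q) < 1e-9 for p, q in zip(czt(x, len(x), w, 1.0), dft(x)))

# 'zoom' evaluation: 8 output points on a narrow arc of the unit circle,
# a spectral sampling the fixed FFT grid cannot provide directly
zoom = czt(x, 8, cmath.exp(-2j * cmath.pi * 0.05), 1.0)
```

This freedom to choose the number and spacing of output frequencies, independent of the input length, is what makes the chirp-z transform attractive for sampling diffraction patterns on arbitrary grids.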
The Rubik’s Cube is a widely recognized puzzle whose underlying mathematics is group theory, the study of algebraic structures such as groups, rings, and fields. A rotation of the Rubik’s Cube can be considered an operation of a group, and the combination of two rotations the composition of two group operations. The rotations, together with this composition operation, form a group called the Rubik’s Cube group, and this paper presents the order of this group, which is also the number of possible valid configurations of the Rubik’s Cube. The valid configurations are those that can be reached by a series of rotations from the starting configuration. This paper presents a method to describe the configurations of the Rubik’s Cube, states the requirements for a configuration to be valid, and calculates the number of possible valid configurations.
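The order calculation referred to above follows from the standard counting of piece permutations and orientations together with the validity constraints:

```python
from math import factorial

def rubiks_cube_group_order():
    """Order of the Rubik's Cube group.
    8 corners can be permuted (8!) with 7 twists chosen freely (3^7: the
    last twist is forced); 12 edges can be permuted (12!) with 11 flips
    chosen freely (2^11: the last flip is forced); and the corner and edge
    permutations must have the same parity (the final division by 2)."""
    return factorial(8) * 3 ** 7 * factorial(12) * 2 ** 11 // 2

print(rubiks_cube_group_order())   # 43252003274489856000
```

The three divisors (3 for twist, 2 for flip, 2 for parity) encode exactly the validity requirements: of all conceivable reassemblies of the pieces, only one in twelve is reachable by rotations.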
The classification of finite groups has been an important topic throughout the history of mathematics. This paper uses group actions as a tool to classify some special finite groups and some groups of low order. First, the paper introduces basic concepts of group actions; it then states and proves some important related theorems, notably the Sylow theorems, which are central to this paper. Groups of specific orders, such as 2p, p^2, pq (p, q distinct primes), and p^3 (p prime), can be classified using group actions and the technique of the semidirect product, and groups of order at most 15 are classified as special cases of the above. In general, however, classifying a wider range of finite groups requires further tools.
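The arithmetic constraints from the Sylow theorems, which drive these classifications, are easy to enumerate: the number of Sylow p-subgroups is congruent to 1 mod p and divides the p'-part of the order. A small sketch:

```python
def sylow_counts(order, p):
    """Possible numbers of Sylow p-subgroups of a group of the given order:
    divisors n of the p'-part of the order with n = 1 (mod p)."""
    m = order
    while m % p == 0:
        m //= p          # strip the p-part; n_p must divide what remains
    return [n for n in range(1, m + 1) if m % n == 0 and n % p == 1]

# |G| = 15: n_3 and n_5 are both forced to equal 1, so both Sylow subgroups
# are normal and G = Z_3 x Z_5, i.e. every group of order 15 is cyclic
print(sylow_counts(15, 3), sylow_counts(15, 5))   # [1] [1]
```

When the count list contains values other than 1 (e.g. order 12 with p = 2 allows 1 or 3), the ambiguity is exactly where the semidirect-product analysis takes over.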
With the development of mathematics, more and more fields of study have been created, and it is worthwhile to apply this knowledge to real life. The well-known Rubik’s Cube puzzle has many connections to a branch of abstract algebra, group theory. This paper discusses how the Rubik’s Cube exhibits the properties of group theory by introducing basic concepts of group theory, followed by examples in terms of this ingenious toy. The paper first introduces the properties of the Rubik’s Cube, then moves to the construction of its group. Subsequently, the four axioms that define a group are explained, and the reasons why the operations of the Rubik’s Cube form a group are given as examples of those axioms. This is followed by further concepts in group theory, exemplified via the Rubik’s Cube, such as closure, cyclicity, and Cayley graphs. Explaining group theory from the perspective of the Rubik’s Cube provides a tangible channel for learning abstract knowledge effectively: learners can study difficult concepts by rotating a simple toy and observing the results.
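The group axioms exemplified by cube moves can be checked mechanically on a miniature model: treat each move as a permutation and close the generating set under composition. A single 4-cycle (a stand-in for one face turn acting on four stickers) generates a cyclic group:

```python
from itertools import product

def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p(q(i))."""
    return tuple(p[i] for i in q)

def generate(gens):
    """Close a set of permutations under composition; finite, so it stops."""
    n = len(gens[0])
    group = {tuple(range(n))}
    frontier = set(gens)
    while frontier:
        new = {compose(a, b) for a, b in product(group | frontier, repeat=2)}
        frontier = new - group
        group |= new
    return group

# a 4-cycle (one 'face turn') generates the cyclic group C_4
r = (1, 2, 3, 0)
g = generate([r])
identity = (0, 1, 2, 3)
assert all(compose(a, b) in g for a, b in product(g, repeat=2))   # closure
assert identity in g                                              # identity
assert all(any(compose(a, b) == identity for b in g) for a in g)  # inverses
print(len(g))   # 4
```

Associativity holds automatically because function composition is associative; the three asserted properties are the ones a learner can also verify physically by repeating a face turn four times.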