Social networks provide a wide range of data for evaluating the effect of information dissemination. This article presents the results of a study describing methods for determining the key parameters of a model used to analyze and predict the dissemination of information in social networks. An approach based on the analysis of statistical data on user behavior in social networks is proposed. The process of estimating the model's main features is described, including the mathematical methods used for data analysis and for modeling information dissemination. The study aims to understand the processes of information dissemination in social networks; to develop recommendations for the effective use of social networks as a tool for communication and brand promotion; and to examine the analytical properties of the classical susceptible-infected-removed (SIR) model and evaluate its applicability to the problem of information dissemination. The results of the study can be used to create algorithms and techniques for effectively managing the process of information dissemination in social networks.
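As a minimal illustration of the classical SIR dynamics the study builds on, the three compartments can be integrated numerically with a simple Euler scheme. The rate constants below (beta, the contact/share rate; gamma, the loss-of-interest rate) are invented for the sketch and are not values estimated in the article:

```python
def simulate_sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    """Euler integration of the SIR compartments, read as information spread:
    S = users who have not seen the post, I = users actively spreading it,
    R = users who have lost interest. Returns (S, I, R) fractions over time."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i             # susceptible users exposed to the post
        di = beta * s * i - gamma * i  # newly informed minus those losing interest
        dr = gamma * i                 # users who stop spreading
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((s, i, r))
    return history

# Illustrative run: 1% of users have seen the post initially.
hist = simulate_sir(beta=0.5, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
s_end, i_end, r_end = hist[-1]
```

Since the three derivatives sum to zero, the total population fraction is conserved at every step, which is a useful sanity check on the integration.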
A general stochastic information diffusion model in social networks based on ... (IJCNCJournal)
Social networks are an important infrastructure for the propagation of information, viruses, and innovations. Since users' behavior is influenced by other users' activity, groups of people form around similar interests. Many real-world events can likewise be modeled on social networks; the spread of disease is one example. People's behavior and the severity of infection are the key parameters in disease dissemination, and together they determine whether the diffusion leads to an epidemic or not. SIRS is a hybrid of the SIR and SIS epidemic models: a person in this model can return to the susceptible state after being removed. Building on the communities established in the social network, we use a compartmental variant of the SIRS model. In this paper, a general compartmental information diffusion model is proposed, and several useful parameters are extracted to analyze it. To adapt the model to realistic behavior, we use a Markovian model, which gives the proposed model a stochastic character. In the stochastic case, we can calculate the probabilities of transition between states and predict the value of each state. A comparison between the two modes of the model verifies the predicted population in each state.
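The SIRS variant described above differs from plain SIR only by a return flow from the removed compartment back to the susceptible one. A deterministic sketch of that difference is below; the rate xi (return-to-susceptible) and the other constants are illustrative assumptions, not parameters from the paper:

```python
def simulate_sirs(beta, gamma, xi, s0, i0, r0, dt=0.1, steps=2000):
    """Euler integration of the SIRS compartments; xi is the rate at which
    removed individuals become susceptible again, so the contamination can
    persist instead of dying out as in plain SIR."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        ds = -beta * s * i + xi * r    # new exposures, plus returns from R
        di = beta * s * i - gamma * i  # infections minus removals
        dr = gamma * i - xi * r        # removals minus returns to S
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s_end, i_end, r_end = simulate_sirs(beta=0.5, gamma=0.1, xi=0.05,
                                    s0=0.99, i0=0.01, r0=0.0)
```

With xi > 0 the infected fraction settles toward a nonzero endemic level rather than vanishing, which is the qualitative behavior that distinguishes SIRS from SIR.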
Social Media Datasets for Analysis and Modeling Drug Usage (ijtsrd)
This paper is based on research in the area of data mining for managing large volumes of social media data, using composite applications to perform more sophisticated analysis. The objective of this paper is to introduce a tool used on social networks to characterize medicine usage. The paper outlines a structured approach to analyzing social media in order to capture emerging trends in medicine abuse by applying methods such as machine learning. It describes how to fetch important data for analysis from a social network, and then discusses big data techniques for extracting useful content for analysis. Sindhu S. B | Dr. B. N Veerappa, "Social Media Datasets for Analysis and Modeling Drug Usage", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd25246.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/25246/social-media-datasets-for-analysis-and-modeling-drug-usage/sindhu-s-b
Supervised Multi Attribute Gene Manipulation For Cancer (paperpublications3)
Abstract: Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviours, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems.
They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently, generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation to prospective and proactive information delivery.
BINARY TEXT CLASSIFICATION OF CYBER HARASSMENT USING DEEP LEARNING (IRJET Journal)
This document discusses the development of a cyberharassment detection system to identify abusive content on social media platforms. It reviews related works that have used machine learning techniques like convolutional neural networks and transfer learning models to detect cyberbullying. The authors investigate four neural network optimizers (RMSprop, Adam, Adadelta, and Adagrad) and find that RMSprop achieved the highest accuracy of 98.45% at classifying harassing content. The goal of this research is to create an effective model for automatically detecting cyberharassment online.
Predicting user behavior using data profiling and hidden Markov model (IJECEIAES)
Mental health disorders affect many aspects of patients' lives, including emotions, cognition, and especially behaviors. E-health technology helps to collect a wealth of information in a non-invasive manner, which represents a promising opportunity to construct health behavior markers. Combining such user behavior data can provide a more comprehensive and contextual view than questionnaire data. With behavioral data, we can train machine learning models to understand data patterns and use prediction algorithms to anticipate the next state of a person's behavior. The remaining challenges are how to apply mathematical formulations to textual datasets and how to find metadata that helps identify a person's life pattern and predict the next state of their behavior. The main idea of this work is to use a hidden Markov model (HMM) to predict user behavior from social media applications by analyzing and detecting states and symbols from the user behavior dataset. To achieve this goal, we analyze and detect the states and symbols from the user behavior dataset, then convert the textual data to numerical matrices. Finally, the HMM is applied to predict the hidden user behavior states. We tested our program and found that the log-likelihood was higher when the model fit the data better. The results of the study indicate that the program is suitable for its purpose and yields valuable data.
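The log-likelihood the abstract refers to is what the HMM forward algorithm computes: the probability of an observation sequence, marginalizing over all hidden state paths. A minimal sketch follows; the two hidden states ("active"/"idle") and the two observed symbols (0 = posts, 1 = silent), along with every probability, are invented for illustration and are not taken from the paper's dataset:

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Forward algorithm: log P(obs) under an HMM given start probabilities,
    a transition matrix, and an emission matrix (all plain nested lists)."""
    n = len(start)
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return math.log(sum(alpha))

# Hypothetical 2-state user-behavior model; all numbers are made up.
start = [0.6, 0.4]                      # P(active), P(idle) at t=0
trans = [[0.7, 0.3], [0.2, 0.8]]        # state transition probabilities
emit  = [[0.9, 0.1], [0.3, 0.7]]        # P(symbol | state)
ll = forward_log_likelihood([0, 1, 1], start, trans, emit)
```

A useful check on any forward implementation is that the probabilities of all possible observation sequences of a fixed length sum to one.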
Collusion-resistant multiparty data sharing in social networks (IJECEIAES)
The number of users on online social networks (OSNs) has grown tremendously over the past few years, with sites like Facebook amassing over a billion users. With the popularity of OSNs, the increase in privacy risk from the large volume of sensitive and private data is inevitable. While there are many features for access control for an individual user, most OSNs still need concrete mechanisms to preserve the privacy of data shared between multiple users. The proposed method uses metrics such as identity leakage (IL) and strength of interaction (SoI) to fine-tune the scenarios that use privacy risk and sharing loss to identify and resolve conflicts. In addition to conflict resolution, bot detection is also done to mitigate collusion attacks. The final decision to share the data item is then ascertained based on whether it passes the threshold condition for the above metrics.
Intelligent analysis of the effect of internet (IJCI JOURNAL)
This paper analyzes the effect of information technology on professionals, academicians, and students with respect to their relationships, education, jobs, health, entertainment, and electronic business, which can bring changes in society. Technology can have both positive and negative consequences for people of different walks of life at different times. The need is to understand the true impact of the internet on society so that people can start thinking about and building a healthy society. In this paper, an empirical study of 60 persons is considered; a causal loop model relating the parameters is formed on the basis of the data collected. These parameters are used to build a fuzzy dynamic model that analyzes the effect of the internet on society. The model is analyzed, and suitable solutions are proposed to counter the negative effects of the internet on our society.
INCREASING THE INVESTMENT’S OPPORTUNITIES IN KINGDOM OF SAUDI ARABIA BY STUDY... (ijcsit)
Social networking sites are a significant source of information about user behavior and about what occupies society across all ages; accordingly, helpful information can be provided to specialists and decision-makers. According to official sources, 98.43% of Saudi youth use social networking sites. Social media data are studied and analyzed to provide the information needed to increase investment opportunities within the Kingdom of Saudi Arabia, by examining what people discuss on these sites through their tweets about the labor market and investment. Given the huge volume of data and its randomness, the data will be surveyed and collected through keywords, prioritized, and labeled as positive, negative, or mixed. The analysis and conclusions will be based on data mining and its techniques of analysis and deduction.
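The positive/negative/mixed labeling step described above can be sketched as a simple keyword-matching pass over tokenized tweets. The keyword sets and the extra "neutral" fallback label below are invented for illustration; the study does not specify its keyword lists:

```python
# Hypothetical keyword sets; the real study's keywords are not given.
POS = {"growth", "opportunity", "profit"}
NEG = {"loss", "unemployment", "crisis"}

def label_tweet(tokens):
    """Label a tokenized tweet by keyword hits: 'positive', 'negative',
    'mixed' when both kinds of keywords occur, else 'neutral'."""
    has_pos = any(t in POS for t in tokens)
    has_neg = any(t in NEG for t in tokens)
    if has_pos and has_neg:
        return "mixed"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"
```

A real pipeline would replace the keyword sets with a trained sentiment model, but the record format (tweet in, one of three labels out) stays the same.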
An updated look at social network extraction system a personal data analysis ... (eSAT Publishing House)
This document summarizes a study on analyzing personal social network data over time. The study extracted data from Facebook, calculated social network analysis metrics like degree distribution and betweenness centrality, and analyzed how the network changed dynamically over time. Key findings included identifying influential and non-influential users, detecting communities that formed within the network, and identifying the celebrity or most influential user within one person's local network. Analyzing how social networks and interactions change dynamically provides insights useful for applications like marketing and recommendations.
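The metrics mentioned above start from basic centrality computations on an extracted edge list. A minimal degree-centrality sketch is below; the edge list and user names are invented, and real analyses (e.g., betweenness) would typically use a graph library rather than hand-rolled code:

```python
from collections import Counter

def degree_centrality(edges):
    """Count the degree of each node in an undirected edge list,
    the simplest social network analysis metric."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# Hypothetical friendship edges extracted from a profile.
edges = [("ann", "bob"), ("ann", "carl"), ("ann", "dina"), ("bob", "carl")]
deg = degree_centrality(edges)
hub = max(deg, key=deg.get)  # most-connected ("celebrity") user in this ego network
```

Tracking how `deg` changes across repeated extractions over time gives the dynamic view the study describes.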
The document discusses sampling techniques for online social networks. It proposes using an outlier indexing algorithm to sample large datasets from social networks. The key advantages of this approach are that random samples can be used for a wide range of analytical tasks and outlier detection. The paper also reviews related literature on estimating search tree sizes and sampling nodes in social networks. It then presents the proposed outlier indexing sampling algorithm for compressing social network structure and interest correlations across users.
IRJET- Event Detection and Text Summary by Disaster Warning (IRJET Journal)
The document proposes two models: 1) The Hot Event Evolution (HEE) model which uses short text data and user interests to detect and track the evolution of hot events on social media. 2) The IncreSTS (Incremental Short Text Summarization) algorithm which can incrementally cluster and summarize comment streams on social networks. The HEE model improves on existing event detection methods by considering how user interests change during event evolution. The IncreSTS algorithm aims to help users understand comment streams in real-time without reading all comments. Both models were found to achieve high efficiency, accuracy and scalability.
New prediction method for data spreading in social networks based on machine ... (TELKOMNIKA JOURNAL)
Information diffusion prediction is the study of the path of dissemination of news, information, or topics in structured data such as a graph. Research in this area focuses on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem of traditional approaches in this area is the use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field, and deep learning, a branch of machine learning, has increasingly been applied to information diffusion prediction. This paper presents a machine learning method based on the graph neural network algorithm, which involves the selection of inactive vertices for activation based on the neighboring vertices that are active in a given scientific topic. In this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: The Digital Bibliography and Library Project (DBLP), Pubmed, and Cora, and attempts to answer the question of who will publish the next article in a specific field of science. Comparison of the proposed method with other methods shows 10% and 5% improved precision on the DBLP and Pubmed datasets, respectively.
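The core idea (an inactive vertex becomes active based on its active neighbors) can be illustrated with a classical linear-threshold cascade. This is a deliberately simplified stand-in for the paper's GNN-based selection, not the method itself; the graph and the threshold value are invented:

```python
def activate(adj, active, threshold=0.5):
    """One round of a linear-threshold cascade: an inactive vertex becomes
    active when at least `threshold` of its neighbours are already active.
    A toy stand-in for GNN-based activation scoring."""
    newly = set()
    for v, nbrs in adj.items():
        if v not in active and nbrs:
            frac = sum(n in active for n in nbrs) / len(nbrs)
            if frac >= threshold:
                newly.add(v)
    return active | newly

# Hypothetical co-authorship-style graph.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
step1 = activate(adj, {"a", "b"})   # "c" activates: 2 of its 3 neighbours are active
step2 = activate(adj, step1)        # "d" follows once "c" is active
```

Iterating the round until no new vertex activates traces a predicted diffusion path; a GNN replaces the fixed threshold with a learned per-vertex score.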
Social network analysis is a method of big data analysis that reveals the nature of connections between objects, including implicit connections. It is a tool of interest because it can be applied to large data sets whose manual processing is very labor-intensive, while automated processing through self-learning linguistic engines requires a lot of resources. In this regard, a study was carried out aimed at developing and testing social network analysis tools and creating a research algorithm applicable to a wide range of analytical and search tasks. The current image of Russia and its activities in the Arctic was chosen as a case.
The research algorithm helps to discover implicit patterns and trends and to relate information flows and events to relevant newsworthy events and news stories, forming a “clear” view of the object of study and the key actors associated with it. The work contributes to filling a gap in the scientific literature caused by insufficient development of the applied issues of using social network analysis to solve managerial tasks, while theoretical papers describing the theory and methodology of such analysis are abundant.
The document describes a study that compared manual and computational thematic analyses of online comments about vaccine hesitancy conducted by teams of public health researchers. The researchers provided one team traditional tools for their analysis and the other team used the Computational Thematic Analysis Toolkit. Both teams independently analyzed the same large dataset of over 600,000 online comments. The researchers then compared the processes and results of the two analyses. They found that while the teams followed different processes, their analyses produced similar overlapping themes. The toolkit enabled researchers without programming skills to conduct computational analysis and facilitated working with large datasets, but also influenced their research process.
Terrorism Analysis through Social Media using Data Mining (IRJET Journal)
This document presents a study that uses deep learning models like Deep Neural Networks (DNN) and Convolutional Neural Networks (CNN) to analyze terrorism through detecting toxicity in social media text data. The study aims to classify text data into categories like toxicity, severe toxicity, obscenity, threat, insult or identity hate. It provides an overview of DNN and CNN models for text classification and compares their methodology, architecture and performance. The models are trained on preprocessed social media data related to terrorist activities and aim to accurately predict the toxicity level and classify tweets for concerned authorities to make informed decisions.
Root cause analysis of COVID-19 cases by enhanced text mining process (IJECEIAES)
The main focus of this research is to find the reasons behind fresh cases of COVID-19 from the public's perception, using data specific to India. The analysis is done with machine learning approaches, and the inferences are validated with medical professionals. Data processing and analysis are accomplished in three steps. First, the dimensionality of the vector space model (VSM) is reduced through an improvised feature engineering (FE) process using weighted term frequency-inverse document frequency (TF-IDF) and forward scan trigrams (FST), followed by removal of weak features using a feature hashing technique. In the second step, an enhanced K-means clustering algorithm is used for grouping, based on public posts from Twitter®. In the last step, latent Dirichlet allocation (LDA) is applied to discover the trigram topics relevant to the reasons behind the increase in fresh COVID-19 cases. The enhanced K-means clustering improved the Dunn index value by 18.11% compared with the traditional K-means method. By incorporating the improvised two-step FE process, the LDA model improved by 14% in terms of coherence score, and by 19% and 15% compared with latent semantic analysis (LSA) and the hierarchical Dirichlet process (HDP) respectively, resulting in 14 root causes for the spike in the disease.
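The first step of the pipeline rests on TF-IDF weighting. A plain (unweighted-variant) TF-IDF sketch is below; the paper's weighted TF-IDF, FST trigrams, and feature hashing are omitted, and the toy documents are invented:

```python
import math

def tf_idf(docs):
    """Plain TF-IDF weights per document: term frequency within the document
    times log of inverse document frequency across the corpus."""
    n = len(docs)
    df = {}                              # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

# Hypothetical tokenized posts.
docs = [["covid", "spike", "crowd"], ["covid", "mask"], ["market", "crowd"]]
w = tf_idf(docs)
```

Terms appearing in every document get weight zero, which is exactly the "weak feature" behavior that the paper's subsequent hashing step prunes more aggressively.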
An Overview on the Use of Data Mining and Linguistics Techniques for Building... (ijcsit)
The usage of Online Social Networks (OSNs), such as Facebook and Twitter, is becoming more and more popular for exchanging and disseminating news and information in real time. Twitter in particular allows the instant dissemination of short messages, in the form of microblogs, to followers. This survey reviews the literature to examine how OSNs, such as the microblogging tool Twitter, can help in the detection of spreading epidemics. The paper highlights significant challenges in the field of Natural Language Processing (NLP) when building microblog-based early disease detection systems. For instance, microblogging data is an unstructured collection of short messages (140 characters on Twitter), with noise and non-standard use of the English language. Hence, research currently explores the field of linguistics to determine the semantics of the text, and uses data mining techniques to extract useful information for disease spread detection. Furthermore, the survey discusses applications and existing early disease detection systems based on OSNs and outlines directions for future research on improving such systems through a combination of linguistic methods, data mining techniques, and recommendation systems.
A MACHINE LEARNING ENSEMBLE MODEL FOR THE DETECTION OF CYBERBULLYING (ijaia)
The pervasive use of social media platforms, such as Facebook, Instagram, and X, has significantly amplified our electronic interconnectedness, and these platforms are now easily accessible from any location at any time. However, the increased popularity of social media has also led to cyberbullying. It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, we present this paper as a contribution to developing an automated system for detecting binary labels of aggressive tweets. Our study demonstrates remarkable performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four different feature extraction techniques to optimize performance within the stacking ensemble learning framework. Combining five machine learning algorithms (Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors) into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy rate of 94.00%, outperforming traditional machine learning models and surpassing the results of prior experiments that used the same dataset, with an accuracy of 0.94 in detecting tweets as aggressive or non-aggressive.
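The stacking idea behind the abstract can be illustrated in miniature: base classifiers produce predictions, and a meta-layer combines them. Below, two keyword-flag "classifiers" stand in for the paper's five sklearn models, and an accuracy-weighted vote stands in for the trained meta-classifier; the tokens, labels, and base rules are all invented:

```python
def train_stack(base_models, X, y):
    """Toy stacking: score each base model on the training data, then
    combine base predictions with an accuracy-weighted vote. A real stack
    would train a meta-classifier on the base predictions instead."""
    weights = []
    for model in base_models:
        preds = [model(x) for x in X]
        acc = sum(p == t for p, t in zip(preds, y)) / len(y)
        weights.append(acc)
    total = sum(weights)
    weights = [w / total for w in weights]

    def predict(x):
        score = sum(w * m(x) for w, m in zip(weights, base_models))
        return 1 if score >= 0.5 else 0
    return predict

# Hypothetical base "classifiers": keyword flags over a token list.
m1 = lambda x: 1 if "idiot" in x else 0
m2 = lambda x: 1 if "hate" in x else 0
X = [["you", "idiot"], ["have", "a", "nice", "day"], ["i", "hate", "you"], ["hello"]]
y = [1, 0, 1, 0]   # 1 = aggressive, 0 = non-aggressive
clf = train_stack([m1, m2], X, y)
```

Swapping the keyword rules for trained models and the weighted vote for a logistic-regression meta-learner recovers the structure the paper describes.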
ABSTRACT: Computational social science (CSS) is an academic discipline that combines the traditional social sciences with computer science. While social scientists provide research questions, data sources, and acquisition methods, computer scientists contribute mathematical models and computational tools. CSS uses computational methods and statistical tools to analyze and model social phenomena, social structures, and human social behavior. The purpose of this paper is to provide a brief introduction to computational social science.
Key Words: computational social science, social-computational systems, social simulation models, agent-based models
A comprehensive study on disease risk predictions in machine learning IJECEIAES
Over recent years, multiple disease risk prediction models have been developed. These models use various patient characteristics to estimate the probability of outcomes over a certain period of time and hold the potential to improve decision making and individualize care. Discovering hidden patterns and interactions from medical databases, along with growing evaluation of disease prediction models, has become crucial. Traditional clinical approaches require many trials, which complicates disease prediction. A comprehensive study of different strategies used to predict disease is presented in this paper. Applying these techniques to healthcare data has improved risk prediction models, helping to identify patients who would benefit from disease management programs and to reduce hospital readmissions and healthcare costs, although the results of these endeavors have been mixed.
This document discusses data mining algorithms for clustering healthcare data streams. It provides an overview of the K-means and D-stream algorithms, and proposes a framework for comparing them on healthcare datasets. The framework involves feature extraction from physiological signals, calculating risk components, and applying the K-means and D-stream algorithms to cluster the data. The results would show the effectiveness and limitations of each algorithm for clustering streaming healthcare data.
Depression and anxiety detection through the Closed-Loop method using DASS-21TELKOMNIKA JOURNAL
The evolution of information and communication technology has brought many changes to daily life. The way humans interact is changing; every form of communication can be expressed directly and instantly. Social media has contributed data of growing size, diversity, and quality. Based on this, the idea was to detect and measure the tendency toward depression and anxiety through social media using the Closed-Loop method on Facebook posts via text mining. Through pre-processing stages, including text extraction with a Naïve Bayes machine learning model for text classification, early signs of depression and anxiety are measured using the DASS-21 parameters. In total, 22,934 Facebook posts were collected as training and learning data from July 2017 to July 2018. As a result, an analysis and mapping of the social demographics of users, covering common triggers of depression and anxiety such as grief, illness, household affairs, and children's education, is made available.
Framework for A Personalized Intelligent Assistant to Elderly People for Acti...CSCJournals
The increasing population of elderly people is associated with the need to meet their increasing requirements and to provide solutions that can improve their quality of life in a smart home. In addition to fear and anxiety towards interfacing with systems, cognitive disabilities, weakened memory, disorganized behavior, and even physical limitations are some of the problems that elderly people tend to face with increasing age. The essence of providing technology-based solutions to address these needs and to create smart, assisted living spaces for the elderly lies in developing systems that can adapt to their diversity and augment their performance in the context of their day-to-day goals. Therefore, this work proposes a framework for the development of a Personalized Intelligent Assistant to help elderly people perform Activities of Daily Living (ADLs) in a smart and connected Internet of Things (IoT) based environment. This Personalized Intelligent Assistant can analyze different tasks performed by the user and recommend activities by considering their daily routine, current affective state, and the underlying user experience. To uphold the efficacy of this proposed framework, it has been tested on two datasets for modelling an "average user" and a "specific user" respectively. The results show that the model achieves a performance accuracy of 73.12% when modelling a "specific user", considerably higher than its performance when modelling an "average user"; this upholds the relevance of developing and implementing the proposed framework.
Improving cyberbullying detection through multi-level machine learningIJECEIAES
Cyberbullying is a known risk factor for mental health issues, demanding immediate attention. This study aims to detect cyberbullying on social media in alignment with the third sustainable development goal (SDG) for health and well-being. Many previous studies employ single-level classification, but this research introduces a multi-class multi-level (MCML) algorithm for a more detailed approach. The MCML approach incorporates two levels of classification: level one for cyberbullying or not cyberbullying, and level two for classifying cyberbullying by type. This study used a dataset of 47,000 tweets from Twitter with six class labels and employed an 80:20 training and testing data split. By integrating bidirectional encoder representations from transformers (BERT) and MCML at level two, we achieved a remarkable 99% accuracy, surpassing BERT-based single-level classification at 94%. In conclusion, the combination of MCML and BERT offers enhanced cyberbullying classification accuracy, contributing to the broader goal of promoting mental health and well-being.
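The two-level idea described above can be sketched as a simple cascade; the code below is an illustrative stand-in (logistic regression on synthetic features rather than the paper's BERT embeddings), assuming scikit-learn and NumPy.

```python
# Illustrative multi-level cascade in the spirit of the MCML approach:
# level one decides cyberbullying vs. not, level two classifies the type.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data: label 0 = not cyberbullying, labels 1..3 = cyberbullying types.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

# Level one: binary detector (any cyberbullying vs. none).
level1 = LogisticRegression(max_iter=1000).fit(X, (y > 0).astype(int))

# Level two: type classifier trained only on the cyberbullying examples.
mask = y > 0
level2 = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

def predict_mcml(X_new):
    """Return 0 for 'not cyberbullying', otherwise the predicted type."""
    is_bullying = level1.predict(X_new).astype(bool)
    out = np.zeros(len(X_new), dtype=int)
    if is_bullying.any():
        out[is_bullying] = level2.predict(X_new[is_bullying])
    return out

preds = predict_mcml(X)
```

Splitting the task this way lets each level specialize: the binary detector handles the easier presence/absence decision, and the type classifier never has to model the "not cyberbullying" class at all.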
Towards Decision Support and Goal Achievement: Identifying Action-Outcome Relationships From Social Media turveycharlyn
Towards Decision Support and Goal Achievement:
Identifying Action-Outcome Relationships From Social
Media
Emre Kıcıman
Microsoft Research
[email protected]
Matthew Richardson
Microsoft Research
[email protected]
ABSTRACT
Every day, people take actions, trying to achieve their personal, high-order goals. People decide what actions to take based on their personal experience, knowledge and gut instinct. While this leads to positive outcomes for some people, many others do not have the necessary experience, knowledge and instinct to make good decisions. What if, rather than making decisions based solely on their own personal experience, people could take advantage of the reported experiences of hundreds of millions of other people?
In this paper, we investigate the feasibility of mining the relationship between actions and their outcomes from the aggregated timelines of individuals posting experiential microblog reports. Our contributions include an architecture for extracting action-outcome relationships from social media data, techniques for identifying experiential social media messages and converting them to event timelines, and an analysis and evaluation of action-outcome extraction in case studies.
1. INTRODUCTION
While current structured knowledge bases (e.g., Freebase) contain a sizeable collection of information about entities, from celebrities and locations to concepts and common objects, there is a class of knowledge that has minimal coverage: actions. Simple information about common actions, such as the effect of eating pasta before running a marathon, or the consequences of adopting a puppy, is missing. While some of this information may be found within the free text of Wikipedia articles, the lack of a structured or semi-structured representation makes it largely unavailable for computational usage. With computing devices continuing to become more embedded in our everyday lives, and mediating an increasing degree of our interactions with both the digital and physical world, knowledge bases that can enable our computing devices to represent and evaluate actions and their likely outcomes can help individuals reason about actions and their
KDD’15, August 10-13, 2015, Sydney, NSW, Australia.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-3664-2/15/08 ...$15.00.
DOI: http://dx.doi.org/10.1145 ...
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Related content
Similar to Assessment of the main features of the model of dissemination of information in social networks
INCREASING THE INVESTMENT’S OPPORTUNITIES IN KINGDOM OF SAUDI ARABIA BY STUDY...ijcsit
Social networking sites are a significant source of information for understanding user behavior and what occupies society across all ages; accordingly, helpful information can be provided to specialists and decision-makers. According to official sources, 98.43% of Saudi youth use social networking sites. The study and analysis of social media data are done to provide the necessary information to increase investment opportunities within the Kingdom of Saudi Arabia, by studying and analyzing what occupies people on social sites through their tweets about the labor market and investment. Given the huge volume of data and its randomness, the data is surveyed and collected through keywords, prioritized, and recorded as positive, negative, or mixed. The study's analysis and conclusions are based on data mining and its techniques of analysis and deduction.
An updated look at social network extraction system a personal data analysis ...eSAT Publishing House
This document summarizes a study on analyzing personal social network data over time. The study extracted data from Facebook, calculated social network analysis metrics like degree distribution and betweenness centrality, and analyzed how the network changed dynamically over time. Key findings included identifying influential and non-influential users, detecting communities that formed within the network, and identifying the celebrity or most influential user within one person's local network. Analyzing how social networks and interactions change dynamically provides insights useful for applications like marketing and recommendations.
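The metrics named in this summary, degree distribution and betweenness centrality, can be illustrated on a toy graph. The sketch below computes both by brute force in plain Python on a small hand-made network (an assumption; a real analysis would use a graph library on actual Facebook data).

```python
# Degree and brute-force betweenness centrality on a toy undirected network:
# node 0 bridges a triangle {0,1,2} and, via node 3, a second triangle
# {3,4,5}; node 6 hangs off node 0.
from collections import deque
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (0, 3), (3, 4), (3, 5), (4, 5), (0, 6)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

degrees = {n: len(adj[n]) for n in adj}  # degree distribution

def all_shortest_paths(s, t):
    """Enumerate all shortest paths from s to t via breadth-first search."""
    paths, best = [], None
    q = deque([[s]])
    while q:
        path = q.popleft()
        if best is not None and len(path) > best:
            break  # all remaining candidate paths are longer
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                q.append(path + [nxt])
    return paths

# Unnormalized betweenness: for each node, sum over node pairs the fraction
# of shortest paths between them that pass through it.
betweenness = {n: 0.0 for n in adj}
for s, t in combinations(sorted(adj), 2):
    paths = all_shortest_paths(s, t)
    for n in adj:
        if n not in (s, t):
            betweenness[n] += sum(1 for p in paths if n in p) / len(paths)

most_influential = max(betweenness, key=betweenness.get)
```

In this toy network the bridging node scores highest, which is exactly the "celebrity / most influential user" role the study identifies in a personal network.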
The document discusses sampling techniques for online social networks. It proposes using an outlier indexing algorithm to sample large datasets from social networks. The key advantages of this approach are that random samples can be used for a wide range of analytical tasks and outlier detection. The paper also reviews related literature on estimating search tree sizes and sampling nodes in social networks. It then presents the proposed outlier indexing sampling algorithm for compressing social network structure and interest correlations across users.
IRJET- Event Detection and Text Summary by Disaster WarningIRJET Journal
The document proposes two models: 1) The Hot Event Evolution (HEE) model which uses short text data and user interests to detect and track the evolution of hot events on social media. 2) The IncreSTS (Incremental Short Text Summarization) algorithm which can incrementally cluster and summarize comment streams on social networks. The HEE model improves on existing event detection methods by considering how user interests change during event evolution. The IncreSTS algorithm aims to help users understand comment streams in real-time without reading all comments. Both models were found to achieve high efficiency, accuracy and scalability.
New prediction method for data spreading in social networks based on machine ...TELKOMNIKA JOURNAL
Information diffusion prediction is the study of the path of dissemination of news, information, or topics in structured data such as a graph. Research in this area is focused on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem of traditional approaches in this area is the use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field. Recently, deep learning, a branch of machine learning, has been increasingly used in information diffusion prediction. This paper presents a machine learning method based on the graph neural network algorithm, which involves the selection of inactive vertices for activation based on the neighboring vertices that are active in a given scientific topic. Essentially, in this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: The Digital Bibliography and Library Project (DBLP), Pubmed, and Cora. The method attempts to answer the question of who will be the publisher of the next article in a specific field of science. The comparison of the proposed method with other methods shows 10% and 5% improved precision on the DBLP and Pubmed datasets, respectively.
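The activation rule described here (inactive vertices activated by their active neighbors) can be illustrated with a deliberately simplified linear-threshold step; the plain-Python sketch below on a made-up toy graph stands in for the learned graph-neural-network rule of the paper.

```python
# Linear-threshold diffusion sketch: an inactive vertex becomes active once
# the fraction of its neighbors that are already active reaches a threshold.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c", "e"},
    "e": {"d"},
}

def diffuse(adj, seeds, threshold=0.5):
    """Iterate until no new vertex crosses the activation threshold;
    return the final set of active vertices."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in active:
                continue
            if len(adj[v] & active) / len(adj[v]) >= threshold:
                active.add(v)
                changed = True
    return active

spread = diffuse(adj, seeds={"a", "b"})
```

Starting from seeds {a, b}, activation cascades through c and d to e; a single peripheral seed like {e}, by contrast, spreads nowhere, which is the qualitative behavior diffusion-prediction models try to learn.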
Social network analysis is a method of big data analysis which reveals the nature of connections between objects, including implicit connections. It is a tool of interest since it can be applied to large data sets whose manual processing is very labor-intensive, while automated processing through self-learning linguistic engines requires a lot of resources. In this regard a study was carried out, aimed at the development and testing of social network analysis tools and the creation of a research algorithm applicable to a wide range of analytical and search tasks. The current image of Russia and its activities in the Arctic was chosen as a case.
The research algorithm helps to discover implicit patterns and trends and to relate information flows and events to relevant newsworthy events and news stories, forming a "clear" view of the study object and the key actors this object is associated with. The work contributes to filling a gap in the scientific literature caused by the insufficient development of applied issues in using social network analysis for managerial tasks, while theoretical papers describing the theory and methodology of such analysis are abundant.
The document describes a study that compared manual and computational thematic analyses of online comments about vaccine hesitancy conducted by teams of public health researchers. The researchers provided one team traditional tools for their analysis and the other team used the Computational Thematic Analysis Toolkit. Both teams independently analyzed the same large dataset of over 600,000 online comments. The researchers then compared the processes and results of the two analyses. They found that while the teams followed different processes, their analyses produced similar overlapping themes. The toolkit enabled researchers without programming skills to conduct computational analysis and facilitated working with large datasets, but also influenced their research process.
Terrorism Analysis through Social Media using Data MiningIRJET Journal
This document presents a study that uses deep learning models like Deep Neural Networks (DNN) and Convolutional Neural Networks (CNN) to analyze terrorism through detecting toxicity in social media text data. The study aims to classify text data into categories like toxicity, severe toxicity, obscenity, threat, insult or identity hate. It provides an overview of DNN and CNN models for text classification and compares their methodology, architecture and performance. The models are trained on preprocessed social media data related to terrorist activities and aim to accurately predict the toxicity level and classify tweets for concerned authorities to make informed decisions.
Root cause analysis of COVID-19 cases by enhanced text mining processIJECEIAES
The main focus of this research is to find the reasons behind fresh cases of COVID-19 from the public's perception, for data specific to India. The analysis is done using machine learning approaches, validating the inferences with medical professionals. The data processing and analysis are accomplished in three steps. First, the dimensionality of the vector space model (VSM) is reduced with an improvised feature engineering (FE) process using weighted term frequency-inverse document frequency (TF-IDF) and forward scan trigrams (FST), followed by removal of weak features using a feature hashing technique. In the second step, an enhanced K-means clustering algorithm is used for grouping, based on public posts from Twitter. In the last step, latent Dirichlet allocation (LDA) is applied to discover the trigram topics relevant to the reasons behind the increase of fresh COVID-19 cases. The enhanced K-means clustering improved the Dunn index value by 18.11% when compared with the traditional K-means method. By incorporating the improvised two-step FE process, the LDA model improved by 14% in terms of coherence score, and by 19% and 15% when compared with latent semantic analysis (LSA) and the hierarchical Dirichlet process (HDP) respectively, thereby yielding 14 root causes for the spike in the disease.
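The three-step pipeline described in this abstract can be sketched with off-the-shelf scikit-learn components. Note this uses library defaults (not the paper's improvised FE, feature hashing, or enhanced K-means), and the four-document corpus is a made-up stand-in.

```python
# Pipeline skeleton: TF-IDF vector space model -> K-means grouping ->
# LDA topic discovery, using plain scikit-learn defaults.
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "crowded markets no masks worn",
    "festival gatherings without masks",
    "delayed vaccination in rural areas",
    "vaccine supply delayed again",
]

# Step 1: weighted TF-IDF vector space model, up to trigrams.
tfidf = TfidfVectorizer(ngram_range=(1, 3))
X = tfidf.fit_transform(docs)

# Step 2: K-means grouping of the posts.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 3: LDA topic discovery on raw term counts (LDA expects counts,
# not TF-IDF weights).
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
topics = lda.transform(counts)  # per-document topic mixtures, rows sum to 1
```

Each document ends up with a cluster label and a topic mixture; in the paper, the dominant trigram topics of each cluster are what get interpreted as root causes.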
An Overview on the Use of Data Mining and Linguistics Techniques for Building...ijcsit
The usage of Online Social Networks (OSNs), such as Facebook and Twitter, is becoming more and more popular for exchanging and disseminating news and information in real time. Twitter in particular allows the instant dissemination of short messages in the form of microblogs to followers. This survey reviews the literature to explore and examine how OSNs, such as the microblogging tool Twitter, can help in the detection of spreading epidemics. The paper highlights significant challenges in the field of Natural Language Processing (NLP) when using microblog-based Early Disease Detection Systems. For instance, microblogging data is an unstructured collection of short messages (140 characters in Twitter), with noise and non-standard use of the English language. Hence, research is currently exploring the field of linguistics to determine the semantics of the text, and uses data mining techniques to extract useful information for disease spread detection. Furthermore, the survey discusses applications and existing early disease detection systems based on OSNs and outlines directions for future research on improving such systems through a combination of linguistic methods, data mining techniques and recommendation systems.
ABSTRACT : Computational social science (CSS) is an academic discipline that combines the traditional social sciences with computer science. While social scientists provide research questions, data sources, and acquisition methods, computer scientists contribute mathematical models and computational tools. CSS uses computationally methods and statistical tools to analyze and model social phenomena, social structures, and human social behavior. The purpose of this paper is to provide a brief introduction to computational social science.
Key Words: computational social science, social-computational systems, social simulation models, agent-based models
A comprehensive study on disease risk predictions in machine learning IJECEIAES
Over recent years, multiple disease risk prediction models have been developed. These models use various patient characteristics to estimate the probability of outcomes over a certain period of time and hold the potential to improve decision making and individualize care. Discovering hidden patterns and interactions from medical databases with growing evaluation of the disease prediction model has become crucial. It needs many trials in traditional clinical findings that could complicate disease prediction. A Comprehensive study on different strategies used to predict disease is conferred in this paper. Applying these techniques to healthcare data, has improvement of risk prediction models to find out the patients who would get benefit from disease management programs to reduce hospital readmission and healthcare cost, but the results of these endeavors have been shifted.
This document discusses data mining algorithms for clustering healthcare data streams. It provides an overview of the K-means and D-stream algorithms, and proposes a framework for comparing them on healthcare datasets. The framework involves feature extraction from physiological signals, calculating risk components, and applying the K-means and D-stream algorithms to cluster the data. The results would show the effectiveness and limitations of each algorithm for clustering streaming healthcare data.
Depression and anxiety detection through the Closed-Loop method using DASS-21TELKOMNIKA JOURNAL
The change of information and communication technology has brought many changes in daily
life. The way humans interacting is changing. It is possible to express each form of communication directly
and instantly. Social media has contributed data in size, diversity and capacity and quality. Based on it,
the idea was to see and measure the tendency of depression and anxiety through social media using
the Closed-Loop method using Facebook text mining posts. Through the stages of pre-processing
including text extraction using the Naïve Bayes machine learning model for text classification, the early
signs of depression and anxiety are measured using DASS-21 parameter. In total, 22,934 Facebook posts
were contributed as training and learning data collected from July 2017 until July 2018. As a results,
analysis and mapping of social demographics of users that are usually as a trigger of depression, and
anxiety, such as grief, illness, household affairs, children education and others are available.
Framework for A Personalized Intelligent Assistant to Elderly People for Acti...CSCJournals
The increasing population of elderly people is associated with the need to meet their increasing requirements and to provide solutions that can improve their quality of life in a smart home. In addition to fear and anxiety towards interfacing with systems; cognitive disabilities, weakened memory, disorganized behavior and even physical limitations are some of the problems that elderly people tend to face with increasing age. The essence of providing technology-based solutions to address these needs of elderly people and to create smart and assisted living spaces for the elderly; lies in developing systems that can adapt by addressing their diversity and can augment their performances in the context of their day to day goals. Therefore, this work proposes a framework for development of a Personalized Intelligent Assistant to help elderly people perform Activities of Daily Living (ADLs) in a smart and connected Internet of Things (IoT) based environment. This Personalized Intelligent Assistant can analyze different tasks performed by the user and recommend activities by considering their daily routine, current affective state and the underlining user experience. To uphold the efficacy of this proposed framework, it has been tested on a couple of datasets for modelling an "average user" and a "specific user" respectively. The results presented show that the model achieves a performance accuracy of 73.12% when modelling a "specific user", which is considerably higher than its performance while modelling an "average user", this upholds the relevance for development and implementation of this proposed framework.
A Machine Learning Ensemble Model for the Detection of Cyberbullyinggerogepatton
The pervasive use of social media platforms such as Facebook, Instagram, and X has significantly amplified our electronic interconnectedness. Moreover, these platforms are now easily accessible from any location at any time. However, the increased popularity of social media has also led to cyberbullying. It is imperative to address the need for finding, monitoring, and mitigating cyberbullying posts on social media platforms. Motivated by this necessity, this paper contributes to developing an automated system for detecting binary labels of aggressive tweets. Our study demonstrates remarkable performance compared to previous experiments on the same dataset. We employed the stacking ensemble machine learning method, utilizing four different feature extraction techniques to optimize performance within the stacking ensemble learning framework. By combining five machine learning algorithms (Decision Trees, Random Forest, Linear Support Vector Classification, Logistic Regression, and K-Nearest Neighbors) into an ensemble method, we achieved superior results compared to traditional machine learning classifier models. The stacking classifier achieved a high accuracy of 94.00% in classifying tweets as aggressive or non-aggressive, outperforming traditional machine learning models and surpassing the results of prior experiments on the same dataset.
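The stacking setup described above can be sketched with scikit-learn. This is a minimal, hedged sketch: synthetic data and default feature vectors stand in for the authors' tweet corpus and their four feature extraction techniques.

```python
# Sketch of a stacking ensemble over the five base learners named in the
# abstract; make_classification replaces the authors' actual tweet features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", LinearSVC(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 2))
```

The meta-learner is trained on cross-validated predictions of the base models, which is what lets the stack outperform any single classifier.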
Improving cyberbullying detection through multi-level machine learning (IJECEIAES)
Cyberbullying is a known risk factor for mental health issues, demanding immediate attention. This study aims to detect cyberbullying on social media in alignment with the third sustainable development goal (SDG) for health and well-being. Many previous studies employ single-level classification, but this research introduces a multi-class multi-level (MCML) algorithm for a more detailed approach. The MCML approach incorporates two levels of classification: level one for cyberbullying or not cyberbullying, and level two for classifying cyberbullying by type. This study used a dataset of 47,000 tweets from Twitter with six class labels and employed an 80:20 training and testing data split. By integrating bidirectional encoder representations from transformers (BERT) and MCML at level two, we achieved a remarkable 99% accuracy, surpassing BERT-based single-level classification at 94%. In conclusion, the combination of MCML and BERT offers enhanced cyberbullying classification accuracy, contributing to the broader goal of promoting mental health and well-being.
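The two-level MCML idea can be sketched as a pair of chained classifiers: level one flags a tweet as cyberbullying or not, and level two assigns a type only to flagged tweets. This hedged sketch uses TF-IDF and logistic regression on made-up example texts in place of the paper's BERT features and 47,000-tweet dataset.

```python
# Level-1: binary cyberbullying flag; Level-2: type, only for flagged texts.
# Texts and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are awful", "have a nice day", "nobody likes your kind",
         "great game last night", "go back where you came from", "lovely weather"]
is_bully = [1, 0, 1, 0, 1, 0]                                        # level-1 labels
bully_type = ["insult", None, "exclusion", None, "ethnicity", None]  # level-2 labels

level1 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, is_bully)

flagged = [t for t, b in zip(texts, is_bully) if b]
types = [ty for ty in bully_type if ty is not None]
level2 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(flagged, types)

def classify(text):
    """Route through level 1, then refine with level 2 only if flagged."""
    if level1.predict([text])[0] == 0:
        return "not_cyberbullying"
    return level2.predict([text])[0]

print(classify("have a nice day"))
```

Splitting the problem this way lets each level specialize: the binary classifier sees all data, while the type classifier is trained only on the harder, flagged subset.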
Towards Decision Support and Goal Achievement: Identifying Action-Outcome Relationships From Social Media (turveycharlyn)
Towards Decision Support and Goal Achievement:
Identifying Action-Outcome Relationships From Social
Media
Emre Kıcıman
Microsoft Research
[email protected]
Matthew Richardson
Microsoft Research
[email protected]
ABSTRACT
Every day, people take actions, trying to achieve their personal, high-order goals. People decide what actions to take based on their personal experience, knowledge and gut instinct. While this leads to positive outcomes for some people, many others do not have the necessary experience, knowledge and instinct to make good decisions. What if, rather than making decisions based solely on their own personal experience, people could take advantage of the reported experiences of hundreds of millions of other people?
In this paper, we investigate the feasibility of mining the relationship between actions and their outcomes from the aggregated timelines of individuals posting experiential microblog reports. Our contributions include an architecture for extracting action-outcome relationships from social media data, techniques for identifying experiential social media messages and converting them to event timelines, and an analysis and evaluation of action-outcome extraction in case studies.
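The timeline-building step the abstract describes can be sketched as: order each user's posts chronologically, then pair an "action" mention with the first later "outcome" mention inside a time window. This is a hedged sketch; the keyword matching below is a naive stand-in for the paper's classifiers for experiential messages, and the posts are invented examples.

```python
# Build per-user event timelines from timestamped posts, then extract
# (action, outcome) pairs within a time window. All data is illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    ("alice", "2015-03-01T08:00", "ate pasta before the marathon"),
    ("alice", "2015-03-01T14:00", "felt great at the finish line"),
    ("bob",   "2015-03-02T09:00", "adopted a puppy"),
    ("bob",   "2015-03-05T21:00", "haven't slept all week"),
]

timelines = defaultdict(list)
for user, ts, text in posts:
    timelines[user].append((datetime.fromisoformat(ts), text))
for tl in timelines.values():
    tl.sort()  # chronological order per user

def action_outcome_pairs(action_kw, outcome_kws, window=timedelta(days=7)):
    """Pair each action mention with the first later outcome mention in window."""
    pairs = []
    for user, tl in timelines.items():
        for t_a, msg_a in tl:
            if action_kw in msg_a:
                for t_o, msg_o in tl:
                    if t_a < t_o <= t_a + window and any(k in msg_o for k in outcome_kws):
                        pairs.append((user, msg_a, msg_o))
                        break
    return pairs

print(action_outcome_pairs("pasta", ["felt", "finish"]))
```

Aggregating such pairs across many users is what lets the correlation between an action and its reported outcomes emerge from the noise of individual timelines.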
1. INTRODUCTION
While current structured knowledge bases (e.g., Freebase) contain a sizeable collection of information about entities, from celebrities and locations to concepts and common objects, there is a class of knowledge that has minimal coverage: actions. Simple information about common actions, such as the effect of eating pasta before running a marathon, or the consequences of adopting a puppy, is missing. While some of this information may be found within the free text of Wikipedia articles, the lack of a structured or semi-structured representation makes it largely unavailable for computational usage. With computing devices continuing to become more embedded in our everyday lives, and mediating an increasing degree of our interactions with both the digital and physical world, knowledge bases that can enable our computing devices to represent and evaluate actions and their likely outcomes can help individuals reason about actions and their
KDD’15, August 10-13, 2015, Sydney, NSW, Australia.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-3664-2/15/08 ...$15.00.
DOI: http://dx.doi.org/10.1145 ...
Similar to Assessment of the main features of the model of dissemination of information in social networks (20)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par... (IJECEIAES)
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
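The discrete PID law whose gains the neural optimizer tunes can be sketched in a few lines. This is a hedged illustration: the first-order plant and the gain values below are placeholders, not the paper's industrial AVR setup.

```python
# Discrete PID control of a first-order plant dy/dt = -y + u.
# Gains and plant are illustrative, not the paper's tuned values.
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt                      # integral term accumulates error
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u)                        # Euler step of the plant
    return y

print(round(simulate_pid(kp=2.0, ki=1.0, kd=0.1), 3))
```

The integral term is what drives the steady-state error to zero; a neural optimizer of the kind described would adjust kp, ki, kd to minimize the mismatch between the regulated value and the target.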
An improved modulation technique suitable for a three level flying capacitor ... (IJECEIAES)
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
A review on features and methods of potential fishing zone (IJECEIAES)
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms to identify potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naive Bayes, and artificial neural networks (ANN); in one reported result, SVM classified fisheries test data with 97.6% accuracy, compared to 94.2% for naive Bayes. Considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f... (IJECEIAES)
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing interconnection delays to improve circuit performance. This led them to three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical coupling is a major concern with 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease electrical wave coupling. This study illustrates a novel noise coupling reduction method using several electrical coupling models. A 22% drop in electrical coupling from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to financial support and stability. This paper introduces a hybrid system combining PV and EV to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farmer support the theoretical work and highlight its benefits for existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
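A cost analysis of the kind the abstract mentions typically includes a simple-payback calculation: years until cumulative energy savings cover the up-front investment. The figures below are made-up placeholders, not the paper's dairy-farm case-study numbers.

```python
# Simple payback period for a PV+EV investment; all inputs are illustrative.
def simple_payback_years(capex, annual_energy_kwh, tariff_per_kwh, annual_opex=0.0):
    """Years until cumulative savings cover the up-front investment."""
    annual_saving = annual_energy_kwh * tariff_per_kwh - annual_opex
    if annual_saving <= 0:
        raise ValueError("savings never cover the investment")
    return capex / annual_saving

years = simple_payback_years(capex=50_000, annual_energy_kwh=80_000,
                             tariff_per_kwh=0.12, annual_opex=1_200)
print(round(years, 2))  # 5.95
```

A short payback like this is the quantity the paper points to when arguing the approach's financial viability.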
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using this switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently, the size of the output filter would be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and a favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in the MG test systems, and the simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
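The NARX idea, regressing the next output on lagged outputs and lagged exogenous inputs with a nonlinear term, can be sketched with ordinary least squares. This is a hedged sketch on a synthetic "battery" whose voltage dynamics are invented for illustration; the paper's model and data are different.

```python
# NARX-style one-step predictor: next voltage regressed on lagged voltage,
# lagged current (exogenous input), and a quadratic nonlinear term.
import numpy as np

rng = np.random.default_rng(0)
n = 500
i_in = rng.uniform(-1, 1, n)                  # exogenous input: current profile
v = np.zeros(n)
for k in range(1, n):                         # synthetic nonlinear plant (illustrative)
    v[k] = 0.8 * v[k - 1] + 0.3 * i_in[k - 1] - 0.1 * v[k - 1] ** 2

# Regressor matrix: [v[k-1], i[k-1], v[k-1]^2]  ->  target v[k]
X = np.column_stack([v[:-1], i_in[:-1], v[:-1] ** 2])
theta, *_ = np.linalg.lstsq(X, v[1:], rcond=None)
pred = X @ theta
rmse = float(np.sqrt(np.mean((v[1:] - pred) ** 2)))
print(theta.round(2), rmse)
```

Because the quadratic regressor captures the plant's nonlinearity, the fit recovers the generating coefficients almost exactly; a purely linear model (dropping the squared term) would leave a systematic residual, which is the abstract's point about nonlinear modeling.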
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and attract significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level on smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. Owing to extraction limitations in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent any negative consequences for the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, which are divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Additionally, each approach is compared and assessed based on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). Based on the comparison of all criteria together, the multi-criteria decision-making analysis assigns overall weights of 24.7% to passive methods, 7.8% to active methods, 5.6% to hybrid methods, 14.5% to remote methods, 26.6% to signal processing-based methods, and 20.8% to computational intelligence-based methods. Thus, it can be seen from the total weights that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. Using Expert Choice software, the proposed hierarchy model is studied and examined.
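The AHP step that produces such weights can be sketched directly: criterion weights are the normalized principal eigenvector of a pairwise-comparison matrix. The 3x3 matrix below is an invented example, not the paper's comparison of detection methods.

```python
# AHP priority vector: principal eigenvector of a pairwise-comparison matrix,
# normalized so the weights sum to 1. The matrix entries are illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # criterion 1 vs criteria 1, 2, 3
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])  # reciprocal matrix: A[j,i] = 1/A[i,j]

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)          # largest eigenvalue
w = np.abs(eigvecs[:, principal].real)
w = w / w.sum()                              # AHP weights sum to 1
print(w.round(3))
```

The same eigenvector computation, applied to the full hierarchy of criteria and alternatives, yields the percentage weights the abstract reports for each detection method family.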
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by environmental factors. This variability hampers the control and utilization of solar cells' peak output. In this study, a single-stage grid-connected PV system is designed to enhance power quality. Our approach employs fuzzy logic in the direct power control (DPC) of a three-phase voltage source inverter (VSI), enabling seamless integration of the PV system connected to the grid. Additionally, a fuzzy logic-based maximum power point tracking (MPPT) controller is adopted, which outperforms traditional methods like incremental conductance (INC) in enhancing solar cell efficiency and minimizing the response time. Moreover, the inverter's real-time active and reactive power is directly managed to achieve a unity power factor (UPF). The system's performance is assessed through MATLAB/Simulink implementation, showing marked improvement over conventional methods, particularly in steady-state and varying weather conditions. For solar irradiances of 500 and 1,000 W/m², the results show that the proposed method reduces the total harmonic distortion (THD) of the current injected into the grid by approximately 46% and 38%, respectively, compared to conventional methods. Furthermore, we compare the simulation results with IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b... (IJECEIAES)
Photovoltaic systems have emerged as a promising energy resource that caters to the future needs of society, owing to their renewable, inexhaustible, and cost-free nature. The power output of these systems relies on solar cell radiation and temperature. In order to mitigate the dependence on atmospheric conditions and enhance power tracking, a conventional approach has been improved by integrating various methods. To optimize the generation of electricity from solar systems, the maximum power point tracking (MPPT) technique is employed. To overcome limitations such as steady-state voltage oscillations and to improve transient response, two traditional MPPT methods, namely the fuzzy logic controller (FLC) and perturb and observe (P&O), have been modified. This research paper aims to simulate and validate the step size of the proposed modified P&O and FLC techniques within the MPPT algorithm using MATLAB/Simulink for efficient power tracking in photovoltaic systems.
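The classic P&O step that such papers modify is simple: perturb the operating voltage, observe the power, and reverse direction whenever power falls. This hedged sketch uses a toy parabolic P-V curve with its peak at 30 V; the real curve and the paper's modified step-size logic are not reproduced here.

```python
# Classic perturb-and-observe MPPT on a toy P-V curve (MPP at 30 V).
def pv_power(v):
    """Toy power-voltage curve; a real PV curve is measured, not analytic."""
    return max(0.0, -0.5 * (v - 30.0) ** 2 + 450.0)

def perturb_and_observe(v=20.0, step=0.5, iters=100):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))
```

The fixed step explains the steady-state oscillation the abstract mentions: the operating point cycles around the peak within one step width, which is exactly what adaptive step-size modifications aim to suppress.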
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for robot hands is always an attractive topic in the research community. This is a challenging problem because robot manipulators are complex nonlinear systems and are often subject to fluctuations in loads and external disturbances. This article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller ensures that the positions of the joints track the desired trajectory, synchronizes the errors, and significantly reduces chattering. First, the synchronous tracking errors and synchronous sliding surfaces are presented. Second, the synchronous tracking error dynamics are determined. Third, a robust adaptive control law is designed; the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy logic. The built algorithm ensures that the tracking and approximation errors are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results, which show that the proposed controller is effective with small synchronous tracking errors and a significantly reduced chattering phenomenon.
Remote field-programmable gate array laboratory for signal acquisition and de... (IJECEIAES)
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting the comprehensive debugging tools needed to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
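The compress-before-transmit idea can be illustrated with a minimal delta plus run-length scheme: slowly varying debug signals produce long runs of identical deltas, which is where the compression ratio comes from. This is a hedged sketch; the paper's adaptive scheme and its 2.90 benchmark ratio are not reproduced here, and the waveform is invented.

```python
# Delta-encode a signal, then run-length-encode the deltas.
# Signal and resulting ratio are illustrative only.
def compress(samples):
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    rle, run = [], 1
    for prev, cur in zip(deltas, deltas[1:]):
        if cur == prev:
            run += 1               # extend the current run of equal deltas
        else:
            rle.append((prev, run))
            run = 1
    rle.append((deltas[-1], run))
    return rle

signal = [0] * 20 + list(range(0, 40, 2)) + [38] * 20   # idle, ramp, idle
encoded = compress(signal)
ratio = len(signal) / (2 * len(encoded))                # each RLE entry holds 2 values
print(encoded, ratio)
```

The idle and constant-slope regions collapse to three (delta, run-length) pairs, so 60 samples transmit as 6 values, letting far more samples fit in the same transfer budget.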
Detecting and resolving feature envy through automated machine learning and m... (IJECEIAES)
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba... (IJECEIAES)
Rapidly and remotely monitoring solar cell system status parameters, namely solar irradiance, temperature, and humidity, is a critical issue in enhancing system efficiency. Hence, in the present article an improved smart prototype of an internet of things (IoT) technique based on an embedded system using the NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions in Egypt, the cities of Luxor, Cairo, and El-Beheira, were chosen to study their solar irradiance profile, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live via Ubidots through the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m² respectively during the solar day. The accuracy and rapidity of the monitoring results obtained with the proposed IoT system make it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions suggest Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell power station.
An efficient security framework for intrusion detection and prevention in int... (IJECEIAES)
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must monitor and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled data for training. Consequently, such complex datasets are difficult to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
AI assisted telemedicine KIOSK for Rural India.pptx
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 13, No. 6, December 2023, pp. 6729-6736
ISSN: 2088-8708, DOI: 10.11591/ijece.v13i6.pp6729-6736
Journal homepage: http://ijece.iaescore.com
Assessment of the main features of the model of dissemination of information in social networks

Assel Imanberdi¹, La Lira¹, Kulmuratova Aitolkyn², Rzayeva Leila², Gulnara Abitova², Bakiyeva Aigerim¹, Orynbayeva Ainur³, Baimakhanbetova Assem³

¹ Department of Information Systems, Faculty of Information Technology, L.N. Gumilyov Eurasian National University, Astana, Republic of Kazakhstan
² Department of Intelligent Systems and Cybersecurity, Astana IT University, Astana, Republic of Kazakhstan
³ Department of Biostatistics, Bioinformatics and Information Technologies, Astana Medical University, Astana, Republic of Kazakhstan
Article history: Received Mar 24, 2023; Revised May 25, 2023; Accepted Jun 4, 2023

ABSTRACT
Social networks provide a fairly wide range of data that allows one way or another to evaluate the effect of the dissemination of information. This article presents the results of a study that describes methods for determining the key parameters of the model needed to analyze and predict the dissemination of information in social networks. An approach based on the analysis of statistical data on user behavior in social networks is proposed. The process of evaluating the main features of the model is described, including the mathematical methods used for data analysis and information dissemination modeling. The study aims to understand the processes of information dissemination in social networks and develop recommendations for the effective use of social networks as a communication and brand promotion tool, as well as to consider the analytical properties of the classical susceptible-infected-removed (SIR) model and evaluate its applicability to the problem of information dissemination. The results of the study can be used to create algorithms and techniques that will effectively manage the process of information dissemination in social networks.
Keywords: Cluster analysis; Information dissemination; Model parameters; Social networks; Susceptible-infected-removed model

This is an open access article under the CC BY-SA license.
Corresponding Author:
Assel Imanberdi
Department of Information Systems, Faculty of Information Technology, L.N. Gumilyov Eurasian National University
010000 Astana, Republic of Kazakhstan
Email: asel_khas@list.ru
1. INTRODUCTION
The study of information dissemination processes becomes a more important task every year. This happens for several reasons: first, because of the importance of information as such in modern society; second, because technological progress, including the improvement of the means of communication that now cover almost the entire globe, has made it essential to understand how a given piece of information spreads [1]. The analysis of these processes allows us to predict the reactions of particular groups of people to particular information and, therefore, to develop strategies for working effectively with an audience and achieving wider coverage.

Social networks, which appeared quite recently, attract more visitors every day. People spend a large share of their free time on them, and as a result much of the information previously consumed from other sources now reaches people through social networks [2]. Thus, the applied value of an information dissemination model can lie in many areas at once: from the creation of effective marketing strategies for news sources to the analysis of the accompanying business processes, of the communication systems between people, and hence of the acceptance of particular opinions in general [3]. Information dissemination models [4] have several important properties that affect their suitability for practical work. For example, many of the parameters described in such models are qualitative rather than quantitative, which makes their use difficult; moreover, these parameters are hard to formalize due to their subjective nature. It must also be remembered that processes in social networks, being only a part of processes on the internet, exhibit fairly high "impulsivity", which further complicates the analysis and can ultimately lead to divergence between the simulated and the real data.
By themselves, information dissemination processes are quite similar to the spread of epidemics [5]. A unit of information can be imagined as a virus that infects more and more people over time through their communication with one another; the virus also has a certain life span, some group of people has immunity to it, and so on. Such parallels can be drawn at length, but for a more substantive description they should be considered in the context of existing methods [6]. To date, several methods describe these processes. Models built to analyze the dissemination of information are based on susceptible-infected-removed (SIR) epidemic models, owing to the similarity of the two processes [7]. However, determining the parameters of an information dissemination model is a complex problem. First, reliable data on the infection rate and on the spread of information in time and space are required. Second, the model itself may have many parameters that must be adjusted for a specific epidemic.
Several attempts have been made to study the dissemination of information using traditional epidemic models such as the susceptible-infected model and the susceptible-infected-recovered model. In research [8]–[10], epidemic models were proposed to study spread processes in various social networks. Wang et al. [11] propose an iterative algorithm for studying an identifiable system and a method for estimating identifiable parameters; the least squares method (LSM), based on a finite set of observations, helps the authors estimate the initial values of the parameters, and the proposed algorithm is then tested. Chen et al. [12] use the method of moments to estimate the parameters and develop a numerical algorithm to solve them; the paper also presents experimental results demonstrating the effectiveness of the proposed method on real datasets. Stolfi et al. [13] developed numerical tools to accurately calculate the steady-state infection probability and influence thresholds, providing an estimated basis for the dissemination strategy. In research [14]–[17], to estimate the parameters that determine the model, the authors propose the least squares method with second-order centering, discuss the problems and future directions of research in this area, and use simulations to test their models against others.
2. METHOD
The main purpose of information dissemination analysis is to illustrate the dissemination process. In the course of the study, an epidemic model was chosen to model the process of information dissemination [18]. Epidemic models remain in use for modeling the dissemination of information because the process can be compared to an epidemic, especially on social media. Due to the lack of distance between agents, the speed of information dissemination is very high (provided that the information is new and of interest); the dissemination begins in small groups and moves to larger groups until it reaches a peak and starts to decline. The advantages of the model include its parametric simplicity and the transparency of its solution. The deterministic SIR epidemic model describes how an epidemic is transmitted from one individual (agent) to another; the process has a decay parameter. The state of an agent can be one of three types: susceptible, infected, and immune. The number of agents in the network can be expressed as (1),

N = S(t) + I(t) + R(t)     (1)

where S(t) is the number of information-receptive agents, I(t) is the number of informed agents, R(t) is the number of unreceptive agents, and N is the total number of agents. The unreceptive state can be interpreted as a loss of interest in the news and further unwillingness to spread it [19]. The following parameters are used in the model: β is the average awareness rate and γ is the constant average rate of "recovery" per unit of time. The model can be represented as the system of equations (2) [20]:
dS(t)/dt = −βS(t)I(t)
dI(t)/dt = βS(t)I(t) − γI(t)     (2)
dR(t)/dt = γI(t)
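As a minimal illustration, system (2) can be integrated with a fixed-step fourth-order Runge-Kutta scheme, matching the integration method and the 0.001-day step stated later in the paper; the parameter values and initial conditions below are illustrative assumptions, not the paper's fitted values.

```python
# Sketch: SIR system (2) integrated with a fixed-step 4th-order Runge-Kutta
# scheme (step 0.001 day, as in the paper). Parameter values are illustrative.

def sir_rhs(s, i, r, beta, gamma):
    """Right-hand side of system (2)."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return ds, di, dr

def rk4_step(s, i, r, beta, gamma, h):
    """One fixed-step RK4 update of (S, I, R)."""
    k1 = sir_rhs(s, i, r, beta, gamma)
    k2 = sir_rhs(s + h/2*k1[0], i + h/2*k1[1], r + h/2*k1[2], beta, gamma)
    k3 = sir_rhs(s + h/2*k2[0], i + h/2*k2[1], r + h/2*k2[2], beta, gamma)
    k4 = sir_rhs(s + h*k3[0], i + h*k3[1], r + h*k3[2], beta, gamma)
    s += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    i += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    r += h/6 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return s, i, r

def simulate(s0, i0, beta, gamma, days, h=0.001):
    """Integrate system (2) from (s0, i0, 0) over the given number of days."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / h)):
        s, i, r = rk4_step(s, i, r, beta, gamma, h)
    return s, i, r

s, i, r = simulate(s0=990.0, i0=10.0, beta=0.0005, gamma=0.1, days=14)
# N = S + I + R is conserved by system (2), since the right-hand sides sum to zero
assert abs((s + i + r) - 1000.0) < 1e-6
```

Because the three right-hand sides of (2) sum to zero, the total N = S(t) + I(t) + R(t) from (1) is preserved by the integration, which serves as a basic sanity check.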
There are various methods for estimating parameters in epidemic models [21]. In this work, the states of agents are described by real data on three current topics of the VK social network, based on a detailed analysis. To estimate the parameters, the authors used a geometric approach. Using a dataset obtained from various news channels of a social network, tangents were drawn to each graph of the function to determine the slope; then, using a system of equations and the initial data, the unknown parameters are estimated, namely the average rate of agent awareness and the average rate of "recovery". The dataset can be represented as follows: likes, reposts, the sum of likes and reposts, views, subscribed, and unsubscribed. Thus, from system (2) we obtain the following formulas for finding the parameters:
β = −tan α / (S(t)I(t)),     γ = tan α / I(t)     (3)

where tan α denotes the slope of the tangent drawn to the corresponding curve,
S(t) is N − views − subscribed at time t, and I(t) is the sum of likes and reposts. Information propagation models can be implemented using various methods and approaches, such as Cox-Ingersoll-Ross (CIR) models, random walk models, and percolation models. Depending on the goals and parameters set, an appropriate method can be chosen and implemented using software tools. In this work, the information dissemination model with the given parameters is built in the SiminTech program [22] using functional block programming (Figure 1).
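The tangent-slope estimation behind formulas (3) can be sketched with one-day finite differences standing in for the drawn tangents. The daily series below are synthetic stand-ins for the observed VK statistics, not the paper's data.

```python
# Sketch of the geometric (tangent-slope) estimation of beta and gamma from
# formulas (3). The tangent slopes tan(alpha) are approximated by one-day
# finite differences; the series below are synthetic stand-ins for VK data.

def estimate_beta_gamma(S, I, R, t):
    """Estimate beta from the slope of S(t) and gamma from the slope of R(t)
    at day t, following formulas (3)."""
    dS = S[t + 1] - S[t]   # slope of S(t): tan(alpha) for the S curve
    dR = R[t + 1] - R[t]   # slope of R(t): tan(alpha) for the R curve
    beta = -dS / (S[t] * I[t])
    gamma = dR / I[t]
    return beta, gamma

# Synthetic daily observations (numbers of agents)
S = [1000, 990, 970, 935]
I = [10, 18, 30, 48]
R = [0, 2, 10, 17]

beta, gamma = estimate_beta_gamma(S, I, R, t=1)
print(beta, gamma)  # slope-based point estimates at day 1
```

In practice one would average such point estimates over several days, since a single finite difference is a noisy approximation of the tangent.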
Numerical integration was performed by the fourth-order Runge-Kutta method [23] with a fixed step of 0.001 (day). Thus, knowing the initial number of information-receptive agents, the initial number of informed ones, and the distribution coefficients, we can simulate the information dissemination process. To evaluate the main features of the model, the authors used hierarchical cluster analysis with the construction of dendrograms. In this paper, we consider a hierarchical agglomerative algorithm. Before clustering begins, all objects are treated as separate clusters (one element in each), which are merged as the algorithm runs. First, the pair of nearest multidimensional elements is selected and combined into a cluster, so the number of clusters becomes (n−1). The procedure is then repeated: either two elements are combined again, or an element is added to the nearest existing cluster. This continues until all clusters are united, that is, until a single cluster containing all elements is obtained. At any stage, merging can be stopped once the desired number of clusters is reached. As a result of successful analysis and integration, our study revealed clusters (branches) on three topical themes.
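The agglomerative procedure described above can be sketched in a few lines. For brevity this sketch uses single linkage rather than the Ward criterion applied in the paper, and the topic feature vectors are invented for illustration.

```python
# Sketch of hierarchical agglomerative clustering: start with singleton
# clusters and repeatedly merge the two nearest clusters until the desired
# number remains. Single linkage is used here for simplicity (the paper uses
# Ward's method); the feature vectors (likes, reposts, views) are synthetic.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(points, k):
    """Merge singleton clusters pairwise until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # find the two clusters with the smallest single-linkage distance
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge j into i
        del clusters[j]
    return clusters

# Daily feature vectors (likes, reposts, views) for six topic series
points = [(120, 15, 900), (130, 18, 950),
          (40, 5, 300), (45, 6, 320),
          (200, 30, 1500), (210, 32, 1550)]

clusters = agglomerate(points, k=3)
print([len(c) for c in clusters])  # → [2, 2, 2]
```

Stopping the merging at k = 3, as in the study, recovers the three topic groups; letting it run to a single cluster would produce the full dendrogram.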
Figure 1. Functional block representation of the model
3. RESULTS AND DISCUSSION
3.1. Data analysis
In this paper, the social network "VK" is considered, as it is the most frequently visited and largest site on the Kazakhstan internet. As the research topics of the communities, current news related to politics,
news related to information technology, and current news from the field of travel were selected. The study period was two calendar weeks, since this is the minimum period needed to fully register the outflow and growth of subscribers. For each day, the average parameters of the model were obtained: the number of likes, reposts, and views, and the number of subscribed and unsubscribed agents. The data required for the parameters described above were collected and adapted to the dissemination model. The data were systematized in Excel tables, as this is the most convenient software for such operations among those that require no special study; besides, data from such tables are much easier to use in other programs. The practical implementation uses SiminTech for modeling the process of information dissemination and Statistica Soft [24] for assessing the main features of the model. Based on the data obtained from a publication related to information technology, using the geometric approach with an initial number of agents susceptible to information and an initial number of informed ones, we simulated the information dissemination process and obtained the main parameters of the model (Figure 2).
Figure 2. Information dissemination modeling
However, there are some discrepancies between the simulation results and the real social network data. This is due to the insufficient number of model parameters for a complete description of the processes. Studying the processes of information dissemination in social networks is an important task in the modern information society: it makes it possible to identify the patterns and principles that guide users when spreading information. Network analysis methods, statistical methods, and machine learning are usually used for such studies [25]. One statistical method is the hierarchical tree. The Ward method was used, in which the distance between clusters equals the sum of squared distances between objects and the cluster center (Figure 3).
Table 1 shows the values for the selected topics discussed in social networks, split between groups (between CC) and within groups (within CC) [26]. In the analysis of variance, the three topics considered for the model parameters were selected with regard to the large distance between classes and the small distance between features within a class. The results of the analysis of variance for the three classes show good classification quality: the significance level is below 5% everywhere.
Potential applications of model parameterization include more effective development of marketing and advertising strategies in social networks, as well as analysis of the impact of information on public opinion and decision-making. Determining the main parameters of the information dissemination model can also help in developing more accurate and efficient algorithms for detecting and combating fake news in social networks. Evaluating the main features of the model helps to determine the most effective methods of communication and to improve dissemination.
Figure 3. Dendrogram of clusters obtained using 3 hot topics in social networks: Ward’s method,
Euclidean distance
Table 1. Analysis of variance of the topics covered
Variable   Between CC   df   Within CC   df   F          Significance
Turkey     0.003324     2    0.000003    3    1430.932   0.000034
Ukraine    0.004849     2    0.000013    3    542.301    0.000145
IT1        0.007686     2    0.000026    3    437.402    0.000200
IT2        0.009063     2    0.000040    3    343.447    0.000287
Travel1    0.025258     2    0.000093    3    405.541    0.000224
Travel2    0.038678     2    0.000141    3    412.478    0.000218
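The F statistic in such a table is the ratio of mean squares, F = (SS_between/df_between) / (SS_within/df_within). A brief sketch using the "Turkey" row; note that because the tabulated sums of squares are rounded to six decimals, the recomputed value only approximates the reported one.

```python
# Sketch of how the F statistic relates to the between- and within-group
# sums of squares in the analysis-of-variance table. The "Turkey" row is
# used; since the SS entries are rounded to six decimals, the recomputed F
# differs from the reported 1430.932.

def f_statistic(ss_between, df_between, ss_within, df_within):
    """F = mean square between groups / mean square within groups."""
    return (ss_between / df_between) / (ss_within / df_within)

F = f_statistic(0.003324, 2, 0.000003, 3)
print(round(F, 1))  # ≈ 1662.0 from the rounded table entries
```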
4. CONCLUSION
In this article, we considered the classic SIR epidemic model and adapted it to the problem of disseminating information in social networks by introducing the parameters β and γ, representing the rate of agent awareness and the rate of "recovery", respectively. Data were collected and systematized, and the factors that influence the dissemination of information were formulated. Using a geometric approach, the main parameters of the model were determined. The results obtained show that the classical epidemic model can be applied to the problem of disseminating information in social networks. However, there are some discrepancies between the simulation results and real data, due to the insufficient number of model parameters needed for a full description of the processes. Further, the applicability of the epidemic model to the problem of information dissemination was evaluated using a hierarchical classifier in Statistica Soft.
SIR models provide insight into the coverage and quantitative distribution of information (how many agents received the information in total) but not into its distribution channels. The model is well suited for preliminary estimation of the coverage of network agents. In the future, we plan to use the model to investigate the parameters that affect the reach of a social network audience, for example, the time of publication and the use of viral marketing across different communities. Although several works already exist, research in the field of information dissemination remains relevant and needs further development.
REFERENCES
[1] H. T. Tu, T. T. Phan, and K. P. Nguyen, “Modeling information diffusion in social networks with ordinary linear differential
equations,” Information Sciences, vol. 593, pp. 614–636, May 2022, doi: 10.1016/j.ins.2022.01.063.
[2] Z. Qiang, E. L. Pasiliao, and Q. P. Zheng, “Model-based learning of information diffusion in social media networks,” Applied
Network Science, vol. 4, no. 1, Dec. 2019, doi: 10.1007/s41109-019-0215-3.
[3] X. Zhou, B. Wu, and Q. Jin, “User role identification based on social behavior and networking analysis for information
dissemination,” Future Generation Computer Systems, vol. 96, pp. 639–648, Jul. 2019, doi: 10.1016/j.future.2017.04.043.
[4] D. M. Romero, B. Uzzi, and J. Kleinberg, “Social networks under stress: Specialized team roles and their communication structure,”
ACM Transactions on the Web, vol. 13, no. 1, pp. 1–24, Feb. 2019, doi: 10.1145/3295460.
[5] H. Al-Dmour, R. Masa’deh, A. Salman, M. Abuhashesh, and R. Al-Dmour, “Influence of social media platforms on public health
protection against the COVID-19 pandemic via the mediating effects of public health awareness and behavioral changes: Integrated
model,” Journal of Medical Internet Research, vol. 22, no. 8, Aug. 2020, doi: 10.2196/19996.
[6] H. Chen, Y. Song, and D. Liu, “Research on cellular automata network public opinion transmission model based on combustion
theory,” Journal of Physics: Conference Series, vol. 1544, no. 1, May 2020, doi: 10.1088/1742-6596/1544/1/012131.
[7] S. Paul, A. Mahata, S. Mukherjee, P. C. Mali, and B. Roy, “Dynamical behavior of a fractional order SIR model with stability
analysis,” Results in Control and Optimization, vol. 10, Mar. 2023, doi: 10.1016/j.rico.2023.100212.
[8] M. Eriksson Krutrök and S. Lindgren, “Social media amplification loops and false alarms: Towards a sociotechnical understanding
of misinformation during emergencies,” The Communication Review, vol. 25, no. 2, pp. 81–95, Apr. 2022, doi:
10.1080/10714421.2022.2035165.
[9] D. He and X. Liu, “Novel competitive information propagation macro mathematical model in online social network,” Journal of
Computational Science, vol. 41, Mar. 2020, doi: 10.1016/j.jocs.2020.101089.
[10] J. Zhang and J. M. F. Moura, “Diffusion in social networks as SIS epidemics: Beyond full mixing and complete graphs,” IEEE
Journal of Selected Topics in Signal Processing, vol. 8, no. 4, pp. 537–551, Aug. 2014, doi: 10.1109/JSTSP.2014.2314858.
[11] P. Wang, H. Liu, X. Zheng, and R. Ma, “A new method for spatio-temporal transmission prediction of COVID-19,” Chaos, Solitons
& Fractals, vol. 167, Feb. 2023, doi: 10.1016/j.chaos.2022.112996.
[12] X. Chen, J. Li, C. Xiao, and P. Yang, “Numerical solution and parameter estimation for uncertain SIR model with application
to COVID-19,” Fuzzy Optimization and Decision Making, vol. 20, no. 2, pp. 189–208, Jun. 2021, doi: 10.1007/s10700-020-
09342-9.
[13] P. Stolfi, D. Vergni, R. Oldenkamp, C. Schultsz, E. Mancini, and F. Castiglione, “An agent-based multi-level model to study the
spread of antimicrobial-resistant gonorrhoea,” in 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM),
Dec. 2022, pp. 803–808, doi: 10.1109/BIBM55620.2022.9994926.
[14] D. A. Tomchin and A. L. Fradkov, “Prediction of the COVID-19 spread in Russia based on SIR and SEIR models of epidemics,”
IFAC-PapersOnLine, vol. 53, no. 5, pp. 833–838, 2020, doi: 10.1016/j.ifacol.2021.04.209.
[15] J. Gu, Y. Shen, and B. Zhou, “Image processing using multi-code GAN prior,” in 2020 IEEE/CVF Conference on Computer Vision
and Pattern Recognition (CVPR), Jun. 2020, vol. 53, no. 5, pp. 3009–3018, doi: 10.1109/CVPR42600.2020.00308.
[16] R. Bhardwaj and A. Agrawal, “Analysis of second wave of COVID-19 in different countries,” Transactions of the Indian National
Academy of Engineering, vol. 6, no. 3, pp. 869–875, Sep. 2021, doi: 10.1007/s41403-021-00248-5.
[17] A. H. Amiri Mehra, M. Shafieirad, Z. Abbasi, and I. Zamani, “Parameter estimation and prediction of COVID-19 epidemic turning
point and ending time of a case study on SIR/SQAIR epidemic models,” Computational and Mathematical Methods in Medicine,
vol. 2020, pp. 1–13, Dec. 2020, doi: 10.1155/2020/1465923.
[18] S. Serikbayeva, J. A. Tussupov, M. A. Sambetbayeva, A. S. Yerimbetova, G. B. Borankulova, and A. T. Tungatarova, “A model of
a distributed information system based on the Z39. 50 protocol,” International Journal of Communication Networks and Information
Security (IJCNIS), vol. 13, no. 3, pp. 511–518, Apr. 2022, doi: 10.17762/ijcnis.v13i3.5122.
[19] M. J. Lazo and A. De Cezaro, “Why can we observe a plateau even in an out of control epidemic outbreak? A SEIR model with the
interaction of n distinct populations for COVID-19 in Brazil,” Trends in Computational and Applied Mathematics, vol. 22, no. 1,
pp. 109–123, Mar. 2021, doi: 10.5540/tcam.2021.022.01.00109.
[20] Z. Chladná, J. Kopfová, D. Rachinskii, and S. C. Rouf, “Global dynamics of SIR model with switched transmission rate,” Journal
of Mathematical Biology, vol. 80, no. 4, pp. 1209–1233, Mar. 2020, doi: 10.1007/s00285-019-01460-2.
[21] J. Woo and H. Chen, “Epidemic model for information diffusion in web forums: experiments in marketing exchange and political
dialog,” SpringerPlus, vol. 5, no. 1, Dec. 2016, doi: 10.1186/s40064-016-1675-x.
[22] B. Wang, J. Zhang, H. Guo, Y. Zhang, and X. Qiao, “Model study of information dissemination in microblog community networks,”
Discrete Dynamics in Nature and Society, vol. 2016, pp. 1–11, 2016, doi: 10.1155/2016/8393016.
[23] G. Jignesh Chowdary, N. S. Punn, S. K. Sonbhadra, and S. Agarwal, “Face mask detection using transfer learning of inceptionv3,”
in BDA 2020: Big Data Analytics, 2020, pp. 81–90, doi: 10.1007/978-3-030-66665-1_6.
[24] S. Degadwala, D. Vyas, H. Biswas, U. Chakraborty, and S. Saha, “Image captioning using inception V3 transfer learning model,” in 2021 6th International Conference on Communication and Electronics Systems (ICCES), Jul. 2021, pp. 1103–1108, doi: 10.1109/ICCES51350.2021.9489111.
[25] G. Taubayev et al., “Machine learning algorithms and classification of textures,” Journal of Theoretical and Applied Information
Technology, vol. 98, no. 23, pp. 3854–3866, 2020.
[26] M. Yessenova et al., “The effectiveness of methods and algorithms for detecting and isolating factors that negatively affect the
growth of crops,” International Journal of Electrical and Computer Engineering (IJECE), vol. 13, no. 2, pp. 1669–1679, Apr. 2023,
doi: 10.11591/ijece.v13i2.pp1669-1679.
BIOGRAPHIES OF AUTHORS
Assel Imanberdi graduated in 2016 from the L.N. Gumilyov Eurasian National University with a degree in Information Systems. In 2018, she received a master's degree in Information Systems. She began her career in 2018 as a specialist at the Joint Stock Company National Information Technologies. Currently, she is a doctoral student at the Department of Information Systems of the L.N. Gumilyov Eurasian National University. She is a beginning researcher; her scientific interests include data analysis, machine learning, image processing, and mathematical and computer modeling. She can be contacted by email: asel_khas@list.ru.
La Lira graduated in 1984 from the Kazakh State University named after S.M. Kirov with a degree in Mathematics. In 1998 she defended her thesis in the specialty "05.13.16 - Application of computer technology, mathematical modeling and mathematical methods in scientific research". She is an associate professor of the Department of Information Systems at the L.N. Gumilyov Eurasian National University and the author of more than 50 scientific papers, including 8 articles in the Scopus database. Her scientific interests are artificial intelligence, data mining, and fuzzy systems. She can be contacted at email: lira_la@hotmail.com.
Kulmuratova Aitolkyn graduated in 2016 from Karaganda State Technical University with a bachelor's degree in Automation and Control. In 2018, she graduated from
Karaganda State Technical University with a master’s degree. During her studies, she worked
as an engineer at the university and participated in a project to develop a subsystem designed to
transmit telemetry data. She began her career as a teacher in 2021 at the Department of Applied
Mathematics and Informatics at Karaganda Buketov University, and currently teaches at the
Department of Intelligent Systems and Cybersecurity at Astana IT University. She is a beginner
researcher, and her scientific interests include computer science, machine learning, RF
electronics, and cybersecurity. She can be contacted at email: ait.sovet@gmail.com.
Rzayeva Leila received her B.S, M.S., and Ph.D. from L.N. Gumilyov Eurasian
National University, Astana, Kazakhstan, in 2015. She works as an Assistant Professor and
Researcher at Astana IT University, Department of Intelligent Systems and Cybersecurity
(Nur-Sultan, Kazakhstan). She has more than 10 years of teaching experience.
Leila Rzayeva has published more than 30 national/international research articles. Her
interests are control systems and industrial automation, robust control system, machine
learning (ML), deep learning (DL) and design of control information systems, as well as the
design of neural networks and artificial intelligent systems. She can be contacted at email:
l.rzayeva@astanait.edu.kz.
Gulnara Abitova received her M.S. degree in Cybernetics in 1988 from the Moscow Institute of Steel and Alloys in Moscow, Russia, and her Ph.D. in Automation and Control
in 2013 from the State University of New York (SUNY) at Binghamton and L.N. Eurasian
National University, Kazakhstan. She graduated from the Postdoctoral Program in Control
Systems in 2012 from Binghamton University, USA. Dr. Abitova worked as a Visiting
Professor and Researcher at the Department of Electrical and Computer Engineering at
Binghamton University, USA, in 2010-2012. In 2017, she was an Invited Professor at the
Savonia University of Applied Sciences and Technology in Savonia, Finland. She published
more than 100 research articles, 6 monographs and books, and 3 theses. Her current research
interest includes control systems and industrial automation, simulation and modeling, neural network technology, artificial intelligence, and cybersecurity. She can be contacted at email:
abitova.gul@gmail.com.
Bakiyeva Aigerim graduated in 2010 from L.N. Gumilyov Eurasian National University
with a bachelor's degree in Informatics. In 2019, she defended her dissertation in the
specialties 05.13.17 "Theoretical Informatics" and 6D075100 "Informatics, Computer
Engineering and Management" and received the degrees of Candidate of Technical Sciences
and Ph.D. She began her career in 2010 as an assistant teacher at the Department of Social
Sciences and Humanities of the Kazakh National University of Arts. Currently, she is a Senior
Lecturer at the Department of Information Systems of L.N. Gumilyov Eurasian National
University. She is the author of more than 35 scientific papers, including 2 monographs and
5 articles in the Scopus database. Her scientific interests include information systems, data
mining, and natural language processing. She can be contacted at email: m_aigerim0707@mail.ru.
ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 13, No. 6, December 2023: 6729-6736
Orynbayeva Ainur graduated from Abay Almaty State University in 2000 with a
degree in Physics and Computer Science. In 2015, she graduated from the Kazakh University of
Economics, Finance and International Trade with a degree in Information Systems. In 2021, she
studied in the specialty 8D01511-Computer Science at L.N. Gumilyov Eurasian National
University. In 2001, she worked as a teacher at the Department of Computer Science, Mathematics
and Biophysics at the Asfendiyarov Kazakh National Medical University. Since
2008, she has been working as a senior lecturer at Astana Medical University. She is the author
of more than 30 scientific papers, including 1 article in the Scopus database. She can be contacted at
email: ainur_tas@mail.ru.
Baimakhanbetova Assem graduated from the Ishenaly Arabayev Kyrgyz State University
in Bishkek in 2005 with a degree in Mathematics and Computer Science and was awarded the
qualification of "teacher". Since 2007, she has been working as a senior lecturer
at Astana Medical University. She can be contacted at email: assemaktore@gmail.com.