This document presents research on using machine learning models to predict resource usage for web applications in cloud computing environments. It aims to develop a prediction model that can forecast future resource needs to enable timely provisioning of virtual machines. The researchers evaluate support vector regression (SVR), neural networks, and linear regression using workload data from the TPC-W benchmark. The results show that SVR achieved more accurate predictions of CPU utilization, throughput, and response time compared to the other models, with error reductions of up to 80%. This suggests SVR may be best for predicting resource usage in non-linear systems like multi-tier web applications.
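As an illustration of the comparison the study describes, here is a minimal sketch in Python, assuming scikit-learn and purely synthetic workload data (the study itself used TPC-W traces, which are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Synthetic, non-linear "CPU utilization vs. request rate" data
# (illustrative only -- the study used TPC-W workload traces).
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(300, 1))           # requests/sec
y = 20 + 60 / (1 + np.exp(-(X[:, 0] - 50) / 8))  # saturating CPU curve
y += rng.normal(0, 2, size=300)                  # measurement noise

X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

svr = SVR(kernel="rbf", C=100, gamma=0.05).fit(X_train, y_train)
lin = LinearRegression().fit(X_train, y_train)

mae_svr = mean_absolute_error(y_test, svr.predict(X_test))
mae_lin = mean_absolute_error(y_test, lin.predict(X_test))
print(f"SVR MAE: {mae_svr:.2f}, Linear MAE: {mae_lin:.2f}")
```

On this saturating curve the RBF-kernel SVR tracks the non-linearity that a straight-line fit cannot, which mirrors the study's finding for multi-tier web workloads.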
Learning from Computer Simulation to Tackle Real-World Problems – NAVER Engineering
Deep learning has made great strides for problems that can be learned with direct supervision, in which a large dataset with high-quality annotation is provided (e.g., ImageNet). However, collecting such a large dataset is expensive and time-consuming. In this talk, I will discuss our recent works on learning from computer simulation to tackle real-world problems. I will first present our novel Simulated+Unsupervised (S+U) domain adaptation algorithm, which fully leverages the flexibility of data simulators and bidirectional mappings between synthetic and real data. We show that our approach achieves improved performance on the gaze estimation task, outperforming the prior approach by Shrivastava et al. Next, I will introduce our work on building data-driven vehicle collision prediction algorithms. Today’s vehicle collision prediction algorithms are rule-based and have not benefited from the recent developments in deep learning. This is because it is almost impossible to collect a large amount of collision data from the real world. To address this challenge, we collect a large accident dataset using a popular video game named GTA and train end-to-end classification algorithms which identify dangerous objects from a given image. Furthermore, we devise a novel domain adaptation method with which we can further improve the performance of our algorithm in the real world.
MEME – An Integrated Tool For Advanced Computational Experiments – GIScRG
The document describes MEME, an integrated tool for advanced computational experiments. MEME allows users to efficiently explore model responses through parameter sweeps and design of experiments. It supports running simulations in parallel on local clusters and grids. MEME collects, analyzes, and visualizes results. It implements intelligent "IntelliSweep" methods like iterative uniform interpolation and genetic algorithms to refine parameter space exploration.
This document summarizes a research project that aims to develop an application to predict airline ticket prices using machine learning techniques. The researchers collected over 10,000 records of flight data including features like source, destination, date, time, number of stops, and price. They preprocessed the data, selected important features, and applied machine learning algorithms like linear regression, decision trees, and random forests to build predictive models. The random forest model provided the most accurate predictions according to performance metrics like MAE, MSE, and RMSE. The researchers propose deploying the best model in a web application using Flask for the backend and Bootstrap for the frontend so users can input flight details and receive predicted price outputs.
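The model-building and evaluation step described above can be sketched with scikit-learn (a hypothetical minimal example on synthetic stand-in data, not the researchers' 10,000-record flight dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Toy stand-in for the flight dataset: price depends non-linearly
# on days-until-departure and number of stops (illustrative only).
rng = np.random.default_rng(1)
days = rng.integers(1, 90, size=1000)
stops = rng.integers(0, 3, size=1000)
price = 300 + 2000 / days - 40 * stops + rng.normal(0, 20, size=1000)
X = np.column_stack([days, stops])

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

# The three metrics the study used to rank models
mae = mean_absolute_error(y_te, pred)
mse = mean_squared_error(y_te, pred)
rmse = np.sqrt(mse)
print(f"MAE={mae:.1f}  MSE={mse:.1f}  RMSE={rmse:.1f}")
```

A model trained this way can be pickled and served behind a Flask route, which is the deployment path the researchers propose.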
This document discusses input data collection and analysis for simulation methods. It describes various ways to collect input data, including historical records, manufacturer specifications, and direct observation. There are two classifications of data: deterministic/probabilistic and discrete/continuous. Common distributions for input data are described, such as Bernoulli, uniform, exponential, and triangular distributions. Methods for analyzing input data include graphical analysis, chi-square tests, and Kolmogorov-Smirnov tests. Software tools like Arena Input Analyzer and ExpertFit can automate the process of fitting distributions to data sets.
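The distribution-fitting workflow these tools automate can be sketched with SciPy (a hypothetical minimal example, not the Arena or ExpertFit workflow itself):

```python
import numpy as np
from scipy import stats

# Simulated "observed" service times; in practice these would come
# from historical records or direct observation.
rng = np.random.default_rng(2)
data = rng.exponential(scale=3.0, size=500)

# Fit an exponential distribution (location fixed at 0), then apply
# a Kolmogorov-Smirnov goodness-of-fit test to the fitted model.
loc, scale = stats.expon.fit(data, floc=0)
ks_stat, p_value = stats.kstest(data, "expon", args=(loc, scale))
print(f"fitted mean={scale:.2f}, KS statistic={ks_stat:.3f}, p={p_value:.3f}")
# A large p-value means the exponential hypothesis is not rejected.
```

One caveat: running the KS test against parameters estimated from the same data makes the test optimistic, so dedicated fitting tools typically apply corrected critical values.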
This document discusses using anomaly detection and visual analytics to improve smart product performance by identifying abnormal sensor or software events. It presents a case study using unsupervised auto-encoder models in Keras and TensorFlow to detect anomalies in drone event log data. Specifically, it finds controller signal loss events and then uses visual analysis of flight paths in Python, Tableau and Plotly to determine where and why errors occurred, such as from environmental obstructions. The goal is to resolve issues and improve product performance by understanding anomalous events.
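The talk's pipeline uses Keras/TensorFlow; the core idea, flagging events whose autoencoder reconstruction error exceeds a threshold learned from normal data, can be sketched with scikit-learn's MLPRegressor so the example stays self-contained (all data here is synthetic, not drone logs):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Normal events lie on a 2-D linear manifold in 4-D feature space;
# anomalies are offset in a direction orthogonal to that manifold.
rng = np.random.default_rng(3)
t = rng.normal(0, 1, size=(500, 2))
normal = np.column_stack([t[:, 0], t[:, 1],
                          t[:, 0] + t[:, 1],
                          t[:, 0] - t[:, 1]])
normal += rng.normal(0, 0.1, size=normal.shape)

t2 = rng.normal(0, 1, size=(20, 2))
anomalies = np.column_stack([t2[:, 0], t2[:, 1],
                             t2[:, 0] + t2[:, 1],
                             t2[:, 0] - t2[:, 1]])
anomalies += np.array([-2.0, 0.0, 1.0, 1.0]) * 3   # off-manifold offset

# A linear autoencoder with a 2-unit bottleneck, trained to
# reconstruct its own input (stand-in for the Keras model).
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="identity",
                  max_iter=2000, random_state=0)
ae.fit(normal, normal)

def reconstruction_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Threshold from normal data: events reconstructed poorly are flagged.
threshold = np.percentile(reconstruction_error(normal), 99)
flagged = reconstruction_error(anomalies) > threshold
print(f"{flagged.sum()} of {len(anomalies)} anomalies flagged")
```

The flagged events would then be handed to the visual-analytics step (flight paths in Tableau/Plotly) to work out where and why they occurred.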
Comparative Study of Pre-Trained Neural Network Models in Detection of Glaucoma – IRJET Journal
The document presents a comparative study of various pre-trained neural network models for the early detection of glaucoma from fundus images. Six pre-trained models - Inception, Xception, ResNet50, MobileNetV3, DenseNet121 and DenseNet169 - were analyzed based on their accuracy, loss graphs, confusion matrices and performance metrics like precision, recall, F1 score and specificity. The DenseNet169 model achieved the best results among the models based on these evaluation parameters.
Machine learning and deep learning techniques can be used to analyze diverse types of data such as images, text, signals and more. Deep learning uses neural networks to learn directly from raw data, enabling applications like object recognition, speech recognition, and analyzing time series signals. Deep learning has become popular due to labeled public datasets, increased GPU acceleration, and pre-trained models that provide a starting point for new problems.
EFFICIENT USE OF HYBRID ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM COMBINED WITH N... – csandit
This research study proposes a novel method for automatic fault prediction from foundry data, introducing the so-called Meta Prediction Function (MPF). Kernel Principal Component Analysis (KPCA) is used for dimension reduction. Different algorithms are used for building the MPF, such as Multiple Linear Regression (MLR), Adaptive Neuro Fuzzy Inference System (ANFIS), Support Vector Machine (SVM) and Neural Network (NN). We used classical machine learning methods such as ANFIS, SVM and NN for comparison with our proposed MPF. Our empirical results show that the MPF consistently outperforms the classical methods.
Reliability is concerned with decreasing faults and their impact, and the earlier faults are detected the better. This presentation therefore covers automated machine learning techniques for detecting faults as early as possible.
The document proposes four methods for improving object detection performance by combining different types of information. The first method uses common fate Hough transform to combine motion and appearance information. The second detects emergency indicators by fusing motion and appearance features. The third utilizes mutual information between image features using pyramid match score. The fourth method aims to detect objects with in-plane rotations by analyzing votes from different keypoints. The methods are evaluated on various datasets and aim to better utilize additional information for more accurate detection.
One-Sample Face Recognition Using HMM Model of Fiducial Areas – CSCJournals
In most real-world applications, multiple image samples of individuals are not easy to collect for direct implementation of recognition or verification systems. Therefore there is a need to perform these tasks even if only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses the two-dimensional discrete wavelet transform (2D DWT) to extract features from images and a hidden Markov model (HMM) for training, recognition and classification. Tested on a subset of the AT&T database, it achieved up to 90% correct classification (hit rate) and a false acceptance rate (FAR) of 0.02%.
2013: Prototype-based learning and adaptive distances for classification – University of Groningen
1) The document discusses prototype-based learning and distance-based classification methods. It focuses on Learning Vector Quantization (LVQ) and its application to classify adrenal tumors using urinary steroid profiles.
2) LVQ works by defining prototypes for each class and updating them during training to better discriminate classes based on a distance measure like Euclidean distance. It was applied to classify 102 benign and 45 malignant adrenal tumor samples based on levels of 32 steroid markers.
3) Generalized LVQ achieved good classification performance on a test set for this problem, identifying representative prototypes for the two tumor classes based on their differing steroid excretion profiles.
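The prototype-update rule described above can be sketched as a basic LVQ1 loop in NumPy (a hypothetical minimal version: the study used Generalized LVQ, which optimizes a cost function, and 32-dimensional steroid profiles rather than this toy 2-D data):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """Basic LVQ1: pull the nearest prototype toward a same-class
    sample, push it away from a different-class sample."""
    P = prototypes.copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)   # Euclidean distances
            k = np.argmin(d)                        # winning prototype
            sign = 1.0 if proto_labels[k] == y[i] else -1.0
            P[k] += sign * lr * (X[i] - P[k])
    return P

# Two well-separated 2-D classes (toy stand-ins for steroid profiles).
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

protos = lvq1_train(X, y, np.array([[1.0, 1.0], [4.0, 4.0]]),
                    np.array([0, 1]))
# Classify by nearest prototype
pred = np.array([0 if np.linalg.norm(p - protos[0]) <
                      np.linalg.norm(p - protos[1]) else 1 for p in X])
acc = (pred == y).mean()
print(f"accuracy: {acc:.2f}")
```

A key appeal of this family of methods, emphasized in the talk, is that the trained prototypes are interpretable: each one is a representative profile for its class.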
Visual diagnostics for more effective machine learning – Benjamin Bengfort
The model selection process is a search for the best combination of features, algorithm, and hyperparameters that maximize F1, R2, or silhouette scores after cross-validation. This view of machine learning often leads us toward automated processes such as grid searches and random walks. Although this approach allows us to try many combinations, we are often left wondering if we have actually succeeded.
By enhancing model selection with visual diagnostics, data scientists can inject human guidance to steer the search process. Visualizing feature transformations, algorithmic behavior, cross-validation methods, and model performance gives us a peek into the high-dimensional realm that our models operate in. As we continue to tune our models, trying to minimize both bias and variance, these glimpses allow us to be more strategic in our choices. The result is more effective modeling, speedier results, and greater understanding of underlying processes.
Visualization is an integral part of the data science workflow, but visual diagnostics are directly tied to machine learning transformers and models. The Yellowbrick library extends the scikit-learn API providing a Visualizer object, an estimator that learns from data and produces a visualization as a result. In this talk, we will explore feature visualizers, visualizers for classification, clustering, and regression, as well as model analysis visualizers. We'll work through several examples and show how visual diagnostics steer model selection, making machine learning more effective.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/10/person-re-identification-and-tracking-at-the-edge-challenges-and-techniques-a-presentation-from-the-university-of-auckland/
Morteza Biglari-Abhari, Senior Lecturer at the University of Auckland, presents the “Person Re-Identification and Tracking at the Edge: Challenges and Techniques” tutorial at the May 2021 Embedded Vision Summit.
Numerous video analytics applications require understanding how people are moving through a space, including the ability to recognize when the same person has moved outside of the camera’s view and then back into the camera’s view, or when a person has passed from the view of one camera to the view of another. This capability is referred to as person re-identification and tracking. It’s an essential technique for applications such as surveillance for security, health and safety monitoring in healthcare and industrial facilities, intelligent transportation systems and smart cities. It can also assist in gathering business intelligence such as monitoring customer behavior in shopping environments. Person re-identification is challenging.
In this talk, Biglari-Abhari discusses the key challenges and current approaches for person re-identification and tracking, as well as his initial work on multi-camera systems and techniques to improve accuracy, especially fusing appearance and spatio-temporal models. He also briefly discusses privacy-preserving techniques, which are critical for some applications, as well as challenges for real-time processing at the edge.
The document reviews various techniques for lie detection beyond polygraphs. It discusses 6 lie detection techniques:
1. An ELM and SLFN machine learning approach using EOG and EEG signals achieves 97% accuracy.
2. A multimodal fusion network combining text, audio, and video achieves 92-96% accuracy.
3. An EEG-based system using DWT and SVM achieves 83% accuracy.
4. A system using facial features and audio features analyzed with a DNN achieves 98.45% accuracy.
5. A system using DIFCW radars and machine learning on respiratory and heartbeat signals achieves 61.5-63.2% accuracy.
6. A multimodal system using EEG, audio, and video with a Bi-LSTM.
NSL KDD Cup 99 dataset Anomaly Detection using Machine Learning Technique – Sujeet Suryawanshi
This document summarizes a presentation given on using decision trees and machine learning techniques for anomaly detection on the NSL KDD Cup 99 dataset. It discusses anomaly detection, machine learning, different machine learning algorithms like decision trees, SVM, Naive Bayes etc. and their application for intrusion detection. It then describes an experiment conducted using the decision tree algorithm on the NSL KDD Cup 99 dataset to classify network traffic as normal or anomalous. The results showed the decision tree model achieved over 98% accuracy on both the full dataset and a reduced feature set.
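The classification step can be sketched with scikit-learn's DecisionTreeClassifier (the NSL-KDD data itself is not bundled here, so synthetic stand-in features are used; the quoted 98% figure applies to the real dataset, not to this toy):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NSL-KDD-style numeric features
# (duration, bytes transferred, connection counts, ...).
rng = np.random.default_rng(5)
n = 2000
X_normal = rng.normal(0.0, 1.0, size=(n, 5))
X_attack = rng.normal(2.5, 1.0, size=(n, 5))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * n + [1] * n)   # 0 = normal, 1 = anomalous

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0,
                                          stratify=y)

clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```

The same fit/score pattern applies when the real NSL-KDD records (with categorical features encoded) are substituted, including the reduced-feature-set variant the presentation evaluates.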
Integrated Hidden Markov Model and Kalman Filter for Online Object Tracking – ijsrd.com
The existing work presented an online tracking algorithm that transfers a visual prior, learned offline from generic real-world images, to online object tracking. A complete dictionary is learned to represent the visual prior from a collection of real-world images; this prior knowledge is generic, and the training image set contains no observations of the target object. The learned visual prior is transferred to construct the object representation using sparse coding and multiscale max pooling. A linear classifier is learned online to distinguish the target from the background and to track appearance variations of both over time. Tracking is carried out within a Bayesian inference framework, with the learned classifier used to construct the observation model. A particle filter estimates the tracking result sequentially but works poorly in noisy scenes, and prior information about object structure is not exploited when tracking the target. This paper proposes an HMM-based Kalman filter to improve online target tracking in noisy sequential image frames. The covariance vector is measured to identify noisy scenes, and discrete time steps are evaluated to separate the target object from the background. Experiments are conducted on challenging sequences and evaluate the tracking algorithm in terms of tracking success rate, centre location error, number of scenes, learned object sizes, and tracking latency.
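The predict/update cycle of the Kalman filter the proposal builds on can be sketched for a 1-D constant-velocity target in NumPy (an illustrative sketch only; the paper's HMM-based formulation and covariance-based noise detection are not specified here):

```python
import numpy as np

# 1-D constant-velocity Kalman filter: state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 1e-3                     # process noise covariance
R = np.array([[4.0]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2) * 10.0                     # initial uncertainty

rng = np.random.default_rng(6)
true_pos = np.arange(50) * 1.0           # target moves at 1 unit/step
z = true_pos + rng.normal(0, 2.0, size=50)  # noisy observations

estimates = []
for zk in z:
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ (np.array([zk]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0])

err_raw = np.mean(np.abs(z - true_pos))
err_kf = np.mean(np.abs(np.array(estimates) - true_pos))
print(f"raw MAE={err_raw:.2f}, filtered MAE={err_kf:.2f}")
```

The filtered track is smoother than the raw measurements; the paper's contribution is deciding, via an HMM over scene states, when the observations are noisy enough that the filter's predictions should dominate.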
IoT Tech Expo 2023_Micha vor dem Berge presentation – VEDLIoT Project
VEDLIoT Next Generation AIoT Applications. Micha vor dem Berge. VEDLIoT Conference Track co-located with IoT Tech Expo, Amsterdam, Netherlands, September 2023
Next generation accelerated AIoT systems and applications. Pedro Trancoso. Special Session on EU Projects, co-located with Computing Frontiers 2023, Bologna, Italy, May 2023
The document outlines an agenda for a presentation on the VEDLIoT project. The agenda includes an introduction to VEDLIoT by Pedro Trancoso, a presentation on VEDLIoT Hardware Platforms by Kevin Mika, and a discussion of Performance Evaluation and Benchmarking in VEDLIoT by Mario Pormann. The VEDLIoT project aims to develop very efficient deep learning techniques for IoT applications through the use of heterogeneous hardware platforms and accelerators.
IoT Week 2022-NGIoT session_Micha vor dem Berge presentation – VEDLIoT Project
This document discusses optimizing a smart home system using edge computing and machine learning. It describes using embedded accelerators like the Nvidia Jetson AGX and Xavier to distribute neural networks and machine learning models to devices around the home. These include a smart mirror, kitchen, door, and other devices. The goal is to optimize the models to increase energy efficiency and distribute the workloads across the edge devices. One focus is developing a smart mirror prototype that can recognize faces, objects and gestures using embedded accelerators like the t.RECS and u.RECS boards to analyze camera input and interact with users through voice and a virtual display.
Next Generation IoT Architectures_Hans Salomonsson – VEDLIoT Project
VEDLIoT Toolchain for Efficient Deep Learning on heterogeneous hardware, Hans Salomonsson, EU-IoT Training Workshops Series – "Next Generation IoT Architectures”, November 2021
The document discusses hardware platforms and accelerators for VEDLIoT. It describes the VEDLIoT Hardware Platform as a heterogeneous, modular, and scalable microserver system that supports the IoT spectrum from embedded to edge to cloud. It then provides details on several platforms: the RECS|Box platform which uses Computer-on-Module standards to achieve flexibility and performance; the t.RECS platform optimized for local edge applications; and the uRECS embedded device platform that supports machine learning acceleration and communication interfaces. Diagrams and specifications are given for the architectures of these platforms.
VEDLIoT Cognitive IoT Hardware Platform. René Griessl. Workshop on Deep Learning for IoT (DL4IoT), co-located with HiPEAC 2022, Budapest, Hungary, June 2022
SS-CPSIoT 2023_Kevin Mika and Piotr Zierhoffer presentation – VEDLIoT Project
VEDLIoT – Accelerated AIoT. Kevin Mika and Piotr Zierhoffer. CPS&IoT’2023 Summer School on Cyber-Physical Systems and Internet-of-Things, Budva, Montenegro, June 2023
VEDLIOT – Accelerated AIoT. Jens Hagemeyer. 2nd Workshop on Deep Learning for IoT (DL4IoT), co-located with HiPEAC 2023, Toulouse, France, January 2023
VEDLIoT – A heterogeneous hardware platform for next-gen AIoT applications, Jens Hagemeyer, EU-IoT Training Session on “Machine Learning at the Edge and the FarEdge”, IoT Week (online event), August 2021
Security for VEDLIoT Components, from Cloud through Edge to IoT. Marcelo Pasin. Workshop on Deep Learning for IoT (DL4IoT), co-located with HiPEAC 2022, Budapest, Hungary, June 2022
Security and Robustness for VEDLIoT Components, from Cloud through Edge. Marcelo Pasin. VEDLIoT Conference Track co-located with IoT Tech Expo, Amsterdam, Netherlands, September 2023
Reconfigurable ML Accelerators in VEDLIoT. Marco Tassemeier. Workshop on Deep Learning for IoT (DL4IoT), co-located with HiPEAC 2022, Budapest, Hungary, June 2022
EU-IoT Training Workshops Series: AIoT and Edge Machine Learning 2021_Jens Ha... – VEDLIoT Project
IoT - Accelerated Deep Learning for Cognitive Edge Computing, Jens Hagemeyer, EU-IoT Training Workshops Series – “AIoT and Edge Machine Learning”, May 2021
History & overview of Bioprocess Technology.pptx – berciyalgolda1
Bioprocess technology is a field that merges biology, chemistry, and engineering to develop processes that harness living cells or their components (like enzymes) for the production of pharmaceuticals, chemicals, food, and biofuels. This multidisciplinary field has evolved significantly over the past few decades, playing a crucial role in various industries.
ALTERNATIVE ANIMAL TOXICITY STUDY.pptx – SAMIR PANDA
Alternatives to animal testing are the development and implementation of test methods that avoid the use of live animals.
Much of our knowledge of human biochemistry, physiology, pharmacology, endocrinology, and toxicology has been derived from animal models; an estimated 10–100 million animals are used for experimentation each year.
The animals used in experimentation range from zebrafish to primates.
The vast majority of animals are sacrificed at the end of the research programme. The use of animals can be further subdivided according to the degree of suffering:
Minor animal suffering: observing animals in behavioural studies, single blood sampling, immunization without adjuvants, etc.
Moderate animal suffering: repeated blood sampling, recovery from general anaesthesia, etc.
Types of Garden (Mughal and Buddhist style) – saloniswain225
A garden is a place where flowers bloom and aesthetic features such as topiary, hedges, and arches are present. A botanical garden, by contrast, is an educational institution for scientific research and for gathering information about different cultures and their garden styles, such as Hindu, Mughal, and Buddhist.
ScieNCE grade 08 Lesson 1 and 2 NLC.pptx – JoanaBanasen1
The X-Pattern Merging of the Equatorial Ionization Anomaly Crests During Geoma... – Sérgio Sacani
A unique phenomenon—a geomagnetically quiet time merging of Equatorial Ionization Anomaly (EIA) crests, leading to an X-pattern (EIA-X) around the magnetic equator—has been observed in the night-time ionospheric measurements by the Global-scale Observations of the Limb and Disk mission. The pattern is also reproduced in an ionospheric model that assimilates slant Total Electron Content from Global Navigation Satellite System and Constellation Observing System for Meteorology, Ionosphere, and Climate 2. A free-running whole atmospheric general circulation model simulation reproduces a similar pattern. Due to the similarity between measurements and simulations, the latter is used to diagnose this heretofore unexplained phenomenon. The simulation shows that the EIA-X can occur during geomagnetically quiet conditions and in the afternoon to evening sector at a longitude where the vertical drift is downward. The downward vertical drift is a necessary but not sufficient condition. The simulation was performed under constant low-solar and quiescent-geomagnetic forcing conditions; therefore we conclude that EIA-X can be driven by lower-atmospheric forcing.
CULEX MOSQUITOES, SYSTEMATIC CLASSIFICATION, MORPHOLOGY, LIFE CYCLE, CLINICA... – DhakeshworShougrakpa
This slide set shows the systematic classification of Culex mosquitoes and their complete life cycle (egg, larva, pupa, and adult, also known as imago). It also shows the morphology of Culex mosquitoes, including head, thorax, abdomen, wings, egg and larval stages, and resting position, compared with Anopheles mosquitoes, as well as the transmission of Wuchereria bancrofti by the vector Culex quinquefasciatus.

Host: W. bancrofti completes its life cycle in two hosts.
1. Definitive host: man.
2. Intermediate host: mosquito. Culex quinquefasciatus is the principal vector worldwide; rarely, Anopheles (rural Africa) or Aedes (Pacific Islands) can serve as a vector.

Infective form: third-stage (L3) filariform larvae are the infective form, found in the proboscis of the mosquito.

Mode of transmission: L3 filariform larvae are deposited in the skin by the insect bite. Residents of endemic areas are exposed to about 50–300 L3 larvae every year.

Human cycle
- Development into adults: larvae penetrate the skin, enter the lymphatic vessels, and migrate to the local lymph nodes, where they molt twice to develop into adult worms within a few months (4–6 weeks for B. malayi).
- Adults lay L1 larvae (microfilariae): adult worms reside in the afferent lymphatics or cortical sinuses of the lymph nodes, where they mate and start laying the first-stage larvae (microfilariae). Male worms die after mating, whereas female worms live for 5–10 years. A gravid female can discharge 50,000 microfilariae per day.
- Prepatent period: the time between infection (entry of L3 larvae) and diagnosis (detection of microfilariae in blood). It is variable, ranging from 80 days to 150 days.

Mosquito cycle
- Transmission: when the mosquito bites an infected man, the microfilariae are ingested. Culex bites at night, whereas Aedes bites in the daytime.
- Exsheathing: microfilariae come out of the sheath within 1–2 hours of ingestion.
- Migration to thoracic muscle: L1 larvae penetrate the stomach wall and migrate to the thoracic muscle in 6–12 hours, where they become sausage-shaped (short and thick).
- Development to infective L3 larvae: L1 larvae molt twice to develop into L2 (long and thick form) and then L3 (long and thin form). The highly active L3 larvae migrate to the labella (distal part of the proboscis) of the mosquito and serve as the infective stage to man.
- Extrinsic incubation period: under optimum conditions, the mosquito cycle takes around 10–14 days.

Clinical symptoms: the clinical symptoms and signs are mainly determined by the duration of the infection. The adult worms, which live in the lymphatic vessels, can cause severe inflammation of the lymphatic system and acute recurrent fever. Secondary bacterial infections are a major factor in the progression towards lymphoedema and elephantiasis, the characteristic swelling of the limbs, genitalia and breasts.

Treatment/control: larvicides such as fenthion can be sprayed on water.
Detecting and translating language ambiguity with multilingual LLMs – Behrang Mehrparvar
Language is one of the most important landmarks in human history. However, most languages can be ambiguous, meaning the same conveyed text or speech results in different actions by different readers or listeners. In this project we propose a method to detect the ambiguity of a sentence using translation by multilingual LLMs. In this context, we hypothesize that a good machine translator should preserve the ambiguity of sentences in all target languages. Therefore, we investigate whether ambiguity is encoded in the hidden representation of a translation model or, instead, whether only a single meaning is encoded. The potential applications of the proposed approach span i) detecting ambiguous sentences, ii) fine-tuning existing multilingual LLMs to preserve ambiguous information, and iii) developing AI systems that can generate ambiguity-free language when needed.
Prototype Implementation of Non-Volatile Memory Support for RISC-V Keystone E...
Handling confidential information has become an increasingly important concern in many areas of society. However, current computing environments are still vulnerable to various threats and should be regarded as untrusted.
Trusted Execution Environments (TEEs) have attracted attention because they can execute a program in a trusted environment constructed on an untrusted platform.
In particular, RISC-V Keystone is one of the most interesting TEEs, since it is a flexibly customizable and fully open-source platform. On the other hand, like other TEEs, it must delegate I/O processing, such as file accesses, to a host OS, resulting in expensive overhead. To address this problem, we consider utilizing byte-addressable non-volatile memory (NVM) modules a useful solution for handling persistent data objects in TEEs.
In this paper, we introduce a prototype implementation of NVM support for Keystone. Additionally, we evaluate it on the Freedom U500 built on a VC707 FPGA dev kit.
https://ken.ieice.org/ken/paper/20210720TC4K/
This is a presentation about electrostatic force. The topic is from the Class 8 "Force and Pressure" lesson in NCERT. I think it might be helpful for you. The presentation has four sections: introduction, types, examples, and demonstration. The demonstration should be done by yourself.
Founders Of Modern Science 16th Century to the 21st Century.pdf
HiPEAC2022_António Casimiro presentation
1. ML Robustness in VEDLIoT
António Casimiro
University of Lisbon
HiPEAC 2022
Budapest, 20 June 2022
2.
Local monitoring of input data correctness
Check characteristics of input features (general data)
Build ECDF of training data
Compare with ECDF of input data
Find outliers/drifts in data values (time series data)
Store input data (time series values)
Forecast next expected input value
Compare forecast value with received input value
Robustness and safety service
Remote monitoring of model correctness
Replicated execution using same input on trusted model
(not for time series data)
Periodically send input/output data to remote node
Run trusted model with received input data
Compare outputs of local and remote models
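The remote monitoring loop described above (periodically replay the input on a trusted replica and compare outputs) can be sketched in Python. This is a minimal illustration only; the function names and the `period` parameter are assumptions, not part of the presented service:

```python
def monitor_remotely(samples, local_model, trusted_model, period=3):
    """Every `period`-th sample, run the trusted replica on the same
    input and compare its output with the local model's output.
    Returns the inputs on which the two models disagreed."""
    mismatches = []
    for i, x in enumerate(samples):
        local_out = local_model(x)
        # Only a subset of inputs is forwarded to the remote node
        if i % period == 0 and trusted_model(x) != local_out:
            mismatches.append(x)
    return mismatches
```

Exact output comparison is an illustrative choice; a tolerance-based comparison would be more realistic for floating-point model outputs.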
3.
Be aware of a context change
Measure the statistical distance between:
Training data distribution
Real-world data distribution
If there is a huge difference between the distributions -> do not trust the ML output
Domain monitoring using statistical methods
Calculate the empirical cumulative distribution function (ECDF):
1. Order all unique observations in the data sample
2. Calculate the cumulative probability for each as the number of observations less than or equal to a given observation, divided by the total number of observations
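The two ECDF steps above can be sketched in plain Python (an illustrative helper, not the project's code):

```python
def ecdf(sample):
    """Empirical cumulative distribution function of a data sample."""
    xs = sorted(set(sample))  # 1. order the unique observations
    n = len(sample)
    # 2. cumulative probability: #observations <= x, divided by the total
    probs = [sum(1 for v in sample if v <= x) / n for x in xs]
    return xs, probs
```

For example, `ecdf([3, 1, 2, 2, 5])` returns the ordered unique values `[1, 2, 3, 5]` with cumulative probabilities `[0.2, 0.6, 0.8, 1.0]`.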
4.
In practice
Take 30 images from the training set and 30 images from the real world
Do that for a specific class of images, based on the classifier's prediction
For each image, take the first pixel in the left corner
Consider one RGB color channel at a time
Calculate the first ECDF on the values of the 30 left-corner pixels from the training set
Do the same for the real-world images -> second ECDF
Statistical distance between data distributions
Apply a statistical distance and save the value
Do that for all the pixels and for the three color channels
Average the distances per color channel and compare the distances with a given threshold
Limitations
Too constrained by color
Differences in brightness can fool the method
Images should be well aligned for proper comparison
5.
Specific models for different environmental conditions
Can they perform better than a single generic one?
Experiment with a CNN object detector (YOLOv4)
Two different conditions: daytime and night
Driving dataset with 10 classes (pedestrian, rider, car, …)
Domain adaptation through a split approach
Training three models:
Daytime model, only objects during daytime
Night model, only objects during night
Daytime and night model, both previous ones
Testing the models:
On daytime images only
On night images only
Training data:
Daytime images: 27,967
Night images: 27,967
Daytime + night images: 55,934
Testing data:
Daytime images: 3,929
Night images: 3,929
6.
The object detector predicts:
Location of the object (coordinates)
Class (e.g., a dog)
Confidence score (a value from 0 to 1)
The confidence score measures the confidence in:
Localization
How likely the box contains an object
How accurate the box is -> IoU
Classification
Precision
«When it guesses, how often does it guess correctly?»
Recall
«Has it guessed every time that it should have guessed?»
Performance metric
Confidence threshold
Positive detection if confidence score > threshold
Strict threshold -> less recall
Precision-recall (PR) curve
Shows the precision/recall trade-off for a varying threshold
mean Average Precision (mAP)
Summarizes such a plot across all classes
mAP@0.5: mAP at an IoU threshold of 50%
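The effect of the confidence threshold on precision and recall can be sketched with a small helper (illustrative only, not tied to the YOLOv4 evaluation code):

```python
def precision_recall(detections, n_ground_truth, conf_threshold):
    """detections: list of (confidence, is_true_positive) pairs.
    Only detections with confidence above the threshold count."""
    kept = [is_tp for conf, is_tp in detections if conf > conf_threshold]
    tp = sum(kept)                       # correct detections kept
    fp = len(kept) - tp                  # wrong detections kept
    precision = tp / (tp + fp) if kept else 1.0
    recall = tp / n_ground_truth
    return precision, recall
```

Sweeping the threshold and plotting the resulting (recall, precision) pairs yields the PR curve; a stricter threshold discards detections and therefore lowers recall, as noted above.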
7.
Results
The day-and-night model performs slightly better in both conditions
It was trained on twice as many images as the other two
It "saw" more objects during training
Better to use a single model and train it on as much data as possible
Then monitor the output
Other ways to improve robustness and ensure correctness of the output:
Adversarial training
Uncertainty quantification
Explainability methods
Statistical distance between data distributions

Performance (mAP@0.5):
                      Day test set   Night test set
Day model             48.59%         -
Night model           -              47.82%
Day and night model   50.58%         49.90%
8.
Input monitoring (time series data)
Framework with an offline part (model training) and an online part (error detection in input data)
Detected errors (due to sensor or communication faults):
Omissions
Outliers
Drifts
[Diagram: the offline phase generates the models (Self, Neighbors, Self + Neighbors) that are used in the online phase]
9.
Training phase
Multiple MLP models are trained to forecast the next data value to be received by a target sensor:
A model using only past data from the target sensor (Self)
A model using data from the target sensor and from neighbor sensors, if they provide correlated data (Self + Neighbors)
A model using only data from neighbor sensors, if they provide correlated data (Neighbors)
This approach makes it possible to distinguish real events (with impact on multiple sensors) from outliers (affecting only the target sensor)
10.
Online error detection
Sensor values go through the monitoring service
Omissions are detected using timers configured for periodic data
Sensor data (from multiple sensors) is stored and properly aligned to feed each model's input
Using multiple forecasts (from running the multiple models), outliers can be detected
The service can also replace outliers, for quality assurance
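The per-sample outlier check can be sketched as below. The exact deviation rule and the replacement of an outlier by the mean forecast are illustrative assumptions, not details given in the talk:

```python
def check_sample(forecasts, received, max_residual):
    """forecasts: next-value predictions from the multiple trained
    models (self, neighbors, self + neighbors).  A received value is
    flagged as an outlier when it deviates from every forecast by
    more than max_residual; it is then replaced by the mean forecast."""
    is_outlier = all(abs(received - f) > max_residual for f in forecasts)
    cleaned = sum(forecasts) / len(forecasts) if is_outlier else received
    return is_outlier, cleaned
```

Requiring disagreement with all forecasts reflects the idea above: a real event shifts the neighbor-based forecasts too, so only a fault confined to the target sensor is flagged.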
12.
Monitoring output through explainability
Provide more evidence to assess output correctness
Explainability methods are used to evaluate the contribution of each input pixel to the output
Monitoring model correctness
Periodically send images to a redundant, trustworthy remote model for comparison of results
Safety requirements on the architectural framework
Safety is one of the clusters of concern, addressed at several levels of abstraction
Other ongoing work
13.
Conclusion
Robustness and safety are important concerns, being addressed from several perspectives:
• Monitoring methods for input and output data quality
• Monitoring methods for checking model correctness
• Architecting for safety
This proved useful for the quality block, as it made it possible to correlate events between sensors and to distinguish environmental events from sensor errors. However, these models can only be used when more than one sensor is available.
Intro
The integration of safety-critical systems calls for deep reflection on how to guarantee safety and protect society from accidents.
In contrast to typical software, ML control flows are specified by inscrutable weights, and they are trained and tested pointwise using specific cases, which has limited effectiveness for improving and assessing an ML system's completeness and coverage.
They rarely handle all test cases (uncertainty) and are susceptible to small changes in the input (adversarial examples), even when they give a high confidence score for a prediction (the score produced by the model that expresses how confident it is in the prediction's correctness).
Flowchart
--- off-line phase ---
1) Select the most suitable dataset for our goal. It is assumed that a trusted dataset is used to train, validate, and test the model, i.e., it is correctly labelled and the images per class are balanced. Data augmentation techniques can be used to increase the amount of data.
2) Apply the Ranger technique to prevent the propagation of hardware transient faults (e.g., bit flips), which consists of applying limits to the ranges of the output values of the neural network's activation functions.
3) Train the model.
4) Generate adversarial examples, and increase adversarial robustness by retraining the model with these examples.
5) Evaluate the model's performance. If we consider it satisfactory, we can use it in the autonomous system. Otherwise it will be necessary to change the model or refine the data, and then repeat the training process.
6) Use explainability methods on the model and the test set to evaluate the contribution of each input feature to the output, assigning an importance score to each individual feature (each pixel).
7) Use this data to train a second model, with the same architecture as the first, which should give us more information about the output of the main model. In this case too, we evaluate the model's performance, and if it is not satisfactory we can consider adding further data.
--- on-line phase ---
8) Here the main model is monitored during real-time operation. Data from the real world is fed to the system, and the model returns predictions with their confidence scores.
9) Each input is passed to both models, and the outputs for this input are compared.
10.1) If both gave the same result and the estimated uncertainty is below a certain threshold, then we can trust the result of the main model and use this prediction for the subsequent operations of the system.
10.2) If the uncertainty is below the threshold but the two models give different results, then the result of the second model is kept if its confidence score is high enough; otherwise we must pass control to the human.
10.3) If the uncertainty is high, the comparison between the two outputs is not taken into account, and in this case too control is given to the human.
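Steps 10.1-10.3 amount to a small decision rule, which can be sketched in Python. The function name and the threshold values are illustrative assumptions, not values given in the text:

```python
def arbitrate(main_pred, second_pred, second_conf, uncertainty,
              unc_threshold=0.3, conf_threshold=0.8):
    """Decision logic for the on-line phase:
    - high uncertainty                -> hand control to the human (10.3)
    - low uncertainty, agreement      -> trust the main model (10.1)
    - low uncertainty, disagreement   -> keep the second model's result
      if its confidence is high enough, else hand to the human (10.2)"""
    if uncertainty > unc_threshold:
        return "human"
    if main_pred == second_pred:
        return main_pred
    return second_pred if second_conf >= conf_threshold else "human"
```

Checking uncertainty first mirrors step 10.3: when the estimate is unreliable, the model comparison is skipped entirely.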