東京大学工学系研究科技術経営戦略学専攻松尾研究室
Generating Sentences from a Continuous Space
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz and Samy Bengio
Presenter: Shuhei Iitsuka
Introduction
A variational autoencoder for natural language sentences.
● The Recurrent Neural Network Language Model (RNNLM) is the state-of-the-art generative model for natural language.
● Drawback: it generates text word by word → it cannot capture the global characteristics of a sentence.
● This work brings the idea of the Variational Autoencoder (VAE) to natural language sentences: a sentence is encoded not as a single representation vector, but as a region of a continuous latent space.
Introduction: Contribution
● Proposed a variational autoencoder architecture for text sentences.
● Evaluated its performance on language modeling and missing-word imputation, compared against an RNNLM.
● Conducted qualitative and quantitative analyses of the proposed model.
Google Poetry
“there is no one else in the world.
there is no one else in sight.
they were the only ones who mattered.
they were the only ones left.
he had to be with me.
she had to be with him.
i had to do this.
i wanted to kill him.
i started to cry.
i turned to him. --”
Agenda
● Introduction
● Background
○ Mapping a sentence to a latent space
○ Variational Autoencoder
● VAE for sentences
● Results
○ Language modeling
○ Imputing missing words
● Analysis
○ Dropout effect
○ Sampling from the posterior
○ Homotopies
● Conclusion
Background: Mapping a sentence to a latent space
Sequence Autoencoders [Dai and Le, 2015]
● Use the seq2seq architecture as an autoencoder.
● Both the encoder and the decoder are RNNs.
Skip-Thought Vectors [Kiros et al., 2015]
● One encoder and two decoders: given sentence_t, the decoders predict the previous sentence (sentence_{t-1}) and the next sentence (sentence_{t+1}).
Paragraph Vector [Le and Mikolov, 2014]
● A non-RNN model.
● Paragraph matrix D → sentence; vocabulary matrix W → word.
● Trains W to predict the next word, using D as context / memory.
Background: Variational autoencoder
● A regularized version of the standard autoencoder.
● Learns each input not as a single point, but as a region in the latent space.
○ The deterministic encoder φ_enc is replaced with a probability distribution q(z|x).
● A KL-divergence term is introduced to keep q(z|x) close to a prior p(z); the full objective is written out below.
○ If the model is trained with only a reconstruction loss, the variances of q(z|x) become extremely small (i.e., q(z|x) becomes effectively deterministic).
[Figure, presenter illustrated: standard autoencoder x → encoder φ → z → decoder → x̂, versus variational autoencoder x → encoder q(z|x) → (μ, σ) → z → decoder → x̂.]
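The objective maximized during training is the usual VAE evidence lower bound (ELBO); in the paper's notation:

\mathcal{L}(\theta; x) \;=\; -\,\mathrm{KL}\big(q_\theta(z \mid x)\,\|\,p(z)\big) \;+\; \mathbb{E}_{q_\theta(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;\le\; \log p(x)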
VAE for sentences
● The authors introduce a VAE for text that uses single-layer LSTM RNNs for both the encoder and the decoder (a minimal code sketch follows below).
● The decoder can be regarded as a special version of an RNNLM that is conditioned on the hidden code z.
● Several variations were explored, with no significant improvement:
○ Concatenating z to the decoder input at every time step.
○ A softplus activation.
○ Feedforward networks between the encoder and the latent variable, and between the latent variable and the decoder.
[Figure, presenter illustrated: the variational sequence autoencoder next to an RNNLM, whose hidden states s_{t-2}, s_{t-1}, s_t read words w_{t-2}, w_{t-1}, w_t through input weights U, recur through W, and emit y_t through output weights V.]
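A minimal PyTorch sketch of this architecture, assuming an LSTM encoder whose final state is mapped to (μ, log σ²) and a decoder whose initial state is derived from z; this is an illustration, not the authors' code, and the class name, dimensions, and wiring details are my assumptions:

import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    """Single-layer LSTM encoder and decoder with a Gaussian latent code z."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.z_to_state = nn.Linear(latent_dim, hidden_dim)  # z conditions the decoder's initial state
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                    # tokens: (batch, seq_len) token ids
        emb = self.embed(tokens)
        _, (h, _) = self.encoder(emb)             # h: (1, batch, hidden_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        h0 = torch.tanh(self.z_to_state(z)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(emb, (h0, c0))  # teacher forcing on the (word-dropped) input
        logits = self.out(dec_out)                # per-step vocabulary logits for reconstruction
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        return logits, kl

The reconstruction loss would be the cross-entropy of logits against the target tokens; kl is the term that the annealing and word-dropout techniques described later try to keep from collapsing to zero.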
VAE for sentences: similar models
● Variational Recurrent Autoencoder (VRAE) [Fabius and van Amersfoort, 2014]
● Continuous latent variables with RNN-style modeling [Bayer and Osendorfer, 2015][Chung et al., 2015]
→ The latent variables are separate per time step.
→ Not suitable for capturing the overall characteristics of a sentence.
● VAE-based document-level language model [Miao et al., 2015]
→ Input texts are modeled as bags of words.
→ Not suitable for capturing sequential characteristics.
Optimization Challenges
● The KL-divergence term tends to collapse to zero (i.e., q(z|x) ≈ p(z), which makes the model equivalent to an RNNLM).
● The model gives up on encoding information in z and instead explains each sample with the decoder alone.
● The authors introduce two techniques to mitigate this issue:
○ KL cost annealing
○ Word dropout and history-less decoding
Optimization technique 1: KL cost annealing
The weight on the KL term is gradually increased from 0 to 1 over the course of training. The KL term then typically shows three phases (figure annotations):
● Spike: the model encodes as much information into z as it can.
● Drop: it starts to smooth out its encodings.
● Rise: it learns to condense more information into z.
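A minimal sketch of such an annealing schedule; a logistic ramp is one common choice, and the constants k and x0 below are assumed values rather than the paper's settings:

import math

def kl_weight(step, k=0.0025, x0=2500):
    """Logistic KL-annealing schedule: close to 0 early in training, approaching 1 later."""
    return 1.0 / (1.0 + math.exp(-k * (step - x0)))

# per-step training loss: reconstruction NLL plus the annealed KL term
# loss = reconstruction_nll + kl_weight(step) * kl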
Optimization technique 2: Word dropout and historyless decoding
Weaken the decoder by replacing a fraction of the conditioned-on words with the generic unknown-word token <UNK>.
● This forces the model to rely on z to make good predictions.
● Parameterized by a keep rate k ∈ [0, 1]; k = 0 gives the fully history-less ("inputless") decoder.
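A minimal sketch of word dropout on the decoder input, operating on token ids; the unk_id argument and the use of Python's random module are my assumptions:

import random

def word_dropout(tokens, keep_rate, unk_id):
    """Replace each decoder-input token with <UNK> with probability 1 - keep_rate."""
    return [t if random.random() < keep_rate else unk_id for t in tokens]

# keep_rate = 1.0 -> standard decoder; keep_rate = 0.0 -> inputless (history-less) decoder
dropped = word_dropout([12, 57, 3, 900], keep_rate=0.75, unk_id=1)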
Results: Language modeling
Objective: Is a continuous latent variable helpful for a standard task (language modeling)?
Methods:
Baseline: RNNLM (non-variational)
Proposal: VAE (variational)
Dataset: Penn Treebank
Measurement: likelihood
Baseline: true test likelihood
Proposal: variational lower bound on the likelihood → this puts the VAE at a disadvantage, since its bound understates the true likelihood.
[Penn Treebank output example, Bui et al., 2010]
Results: Language modeling
NLL: negative log-likelihood, PPL: perplexity; lower is better for both. The number in parentheses is the KL term.
● The VAE's performance is slightly worse than the RNNLM's.
● Without the optimization techniques (KL annealing and word dropout), its performance becomes essentially the same as the RNNLM's.
Inputless decoder (i.e., word-dropout keep rate = 0)
● The KL term becomes large, but the VAE performs better than the RNNLM.
Note: in the standard setting the VAE's reconstruction term is 101 - 2 = 99, below the RNNLM's 100.
Results: Imputing missing words
Claim: the VAE's global sentence features make it well suited to the task of imputing missing words.
Computing the likelihood of an imputation exactly is intractable (it would require a sum over the whole vocabulary V at every step!),
so the authors introduce an adversarial evaluation for quantitative comparison.
Adversarial evaluation: how well the model's outputs can deceive a classifier trained to distinguish them from the true completions.
[Table: example of the imputation task and the outputs from each model.]
Results: Imputing missing words
Dataset: Books Corpus (mostly fiction), about 80M sentences after pruning.
Classifiers to train: a bag-of-unigrams logistic-regression classifier and an LSTM logistic-regression classifier.
● The VAE shows a lower adversarial error (i.e., it is more deceptive) than the RNNLM.
● The negative log-likelihoods are comparable.
○ The RNNLM can lower its NLL by giving "safe" answers
(e.g., completing the sentence with ", he said.").
→ The RNNLM yields less diverse samples than the VAE, producing natural-looking sentences by favoring generic, high-probability endings.
Adversarial error ∈ [0, 0.5]: 0.5 means every imputed sample is detected (not deceptive at all), 0 means none is detected. Lower is better.
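A minimal sketch of the adversarial-evaluation idea with a bag-of-unigrams classifier; scikit-learn is my choice of tooling (not necessarily the authors'), real_sentences / generated_sentences are placeholder lists, and the accuracy-minus-chance formula follows the error range described above:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def adversarial_error(real_sentences, generated_sentences):
    """Train a unigram classifier to tell real from generated text; return held-out accuracy minus chance."""
    texts = real_sentences + generated_sentences
    labels = [1] * len(real_sentences) + [0] * len(generated_sentences)
    x_train, x_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2, random_state=0)
    vectorizer = CountVectorizer()
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(vectorizer.fit_transform(x_train), y_train)
    accuracy = classifier.score(vectorizer.transform(x_test), y_test)
    return max(accuracy - 0.5, 0.0)   # 0 = indistinguishable from real, 0.5 = always detected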
Analysis: The impact of word dropout
[Figure: the cost (bound on NLL), the KL term, and sample outputs as the word-dropout keep rate is lowered.]
● As the keep rate decreases, the amount of information stored in the latent variable (the KL term) increases.
● At the most extreme settings, the likelihood gets worse (the cost rises).
● Reading the samples (presenter's annotations):
○ High keep rate: outputs are too typical; little of the topic is captured.
○ Lower keep rate: outputs are far less typical and have a clear topic.
○ Keep rate 0: outputs cease to be grammatically correct.
Analysis: Sampling from the posterior
● Analysis of reconstructed input texts: sample z from the posterior q(z|x) and decode.
○ Note: the model is variational, so it is not simply re-emitting memorized samples.
● The reconstructions appear to capture the length and topic of the given sentences.
● As the input sentence gets longer, the outputs become more varied.
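A minimal sketch of what "sampling from the posterior" means here, assuming mu and logvar come from a trained encoder such as the SentenceVAE sketch above (the zero tensors are stand-ins for one sentence's encoding):

import torch

mu, logvar = torch.zeros(32), torch.zeros(32)   # stand-ins for the encoder's output for one input sentence
samples = [mu + torch.randn_like(mu) * torch.exp(0.5 * logvar) for _ in range(3)]
# decoding each sampled z with the trained decoder yields several distinct reconstructions of the same input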
Analysis: Homotopies
● Homotopy = linear interpolation between two codes in the latent space: the VAE can generate the sentences that lie between two points!
● The intermediate outputs are grammatical and have a consistent topic.
● Sentences with similar syntax and topic but flipped sentiment can sometimes be a problem.
[Examples: the sequence autoencoder's interpolated outputs are often grammatically incorrect, while the variational autoencoder's stay grammatically correct.]
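A minimal sketch of the interpolation itself in NumPy; z1 and z2 stand for the latent codes of two encoded sentences, and feeding each interpolated code to a trained decoder is assumed:

import numpy as np

def homotopy(z1, z2, steps=5):
    """Points on the line z(t) = z1 * (1 - t) + z2 * t for t in [0, 1]."""
    return [z1 * (1.0 - t) + z2 * t for t in np.linspace(0.0, 1.0, steps)]

# each interpolated code would then be decoded into a sentence by the trained decoder
codes = homotopy(np.zeros(32), np.ones(32), steps=5)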
Analysis: Homotopies
[Example, presenter's interpretation: one endpoint of the interpolation reads as convinced / understood, the other as unconvinced / suspicious, i.e., the sentiment flips.]
Conclusion
This paper introduces the use of a variational autoencoder for natural language sentences.
● Showed that the novel training techniques (KL annealing and word dropout) are needed for successful training.
● Found that the model is effective for imputing missing words.
● Showed that the model can produce diverse sentences through smooth interpolation.
Future work
● Factorization of the latent space
● Incorporating conditional settings
● Learning sentence embeddings in a semi-supervised fashion
● Going beyond adversarial evaluation to a fully adversarial training objective
