Describes the latest research in visual reasoning, in particular visual question answering, covering both images and videos, with a dual-process-theory approach and relational memory.
Analyzing Text Preprocessing and Feature Selection Methods for Sentiment Analysis - Nirav Raje
This was a research project for an undergraduate academic seminar. It analyzed the impact of various text preprocessing techniques, feature weighting (FF, FP, TF-IDF), feature selection (filters, wrappers, embedded), lemmatization, and tokenization (unigrams, bigrams, and 1-to-3-grams) on three open Twitter datasets.
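As context for the feature-weighting schemes mentioned above, a minimal sketch of TF-IDF weighting over n-gram features using scikit-learn; the example tweets and the (1, 2) n-gram range are illustrative assumptions, not the project's actual setup.

```python
# Minimal TF-IDF feature-weighting sketch with scikit-learn.
# The three example "tweets" below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "great phone, loving the battery life",
    "terrible battery, phone died in an hour",
    "battery life is okay, camera is great",
]

# Unigram + bigram features; the project above also tested 1-to-3-grams.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
X = vectorizer.fit_transform(tweets)

print(X.shape)                               # (3 tweets, n weighted n-gram features)
print(sorted(vectorizer.vocabulary_)[:10])   # a few of the extracted n-grams
```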
Abstractive text summarization is currently one of the most important research topics in NLP. However, a deep understanding of what it is and how it works requires a series of foundational concepts that build on one another. This presentation therefore gives an overview of sequence-to-sequence models and the various versions of attention developed over the past few years. In addition, natural language generation (NLG), with a focus on decoder techniques and their associated problems, is reviewed as a supporting factor behind the success of automatic summarization. Finally, abstractive text summarization itself is presented, along with potential approaches from recent research papers for tackling some open issues.
The document summarizes an event for the UMCU AI Methods Lab at Utrecht University Medical Center. The lab brings together experts from different departments to collaborate on developing and applying AI methods. It focuses on fundamental research questions around algorithmic fairness, causality, explainability and other topics. The lab aims to facilitate long-term collaborations between clinicians, methodologists and AI experts to advance healthcare AI development and implementation.
word sense disambiguation, wsd, thesaurus-based methods, dictionary-based methods, supervised methods, lesk algorithm, michael lesk, simplified lesk, corpus lesk, graph-based methods, word similarity, word relatedness, path-based similarity, information content, surprisal, resnik method, lin method, elesk, extended lesk, semcor, collocational features, bag-of-words features, the window, lexical semantics, computational semantics, semantic analysis in language technology.
A Simple Introduction to Word Embeddings - Bhaskar Mitra
In information retrieval there is a long history of learning vector representations for words. In recent times, neural word embeddings have gained significant popularity for many natural language processing tasks, such as word analogy and machine translation. The goal of this talk is to introduce the basic intuitions behind these simple but elegant models of text representation. We will start our discussion with classic vector space models and then make our way to recently proposed neural word embeddings. We will see how these models can be useful for analogical reasoning as well as for many information retrieval tasks.
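As a concrete illustration of the analogical reasoning the talk covers, a minimal sketch of the classic vector-offset analogy using gensim's pre-trained vectors; the specific model name ("glove-wiki-gigaword-50") is an assumption and any small pre-trained embedding would do.

```python
# Word-analogy sketch: king - man + woman ≈ queen.
# Assumes gensim is installed; downloads a small pre-trained GloVe model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# most_similar implements the vector-offset analogy: sum positives, subtract negatives.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3)
print(result)  # 'queen' is typically the top hit
```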
This document provides an overview of Latent Dirichlet Allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. It defines key terminology for LDA including documents, words, topics, and distributions. The document then explains LDA's graphical model and generative process, which represents documents as mixtures over latent topics and generates words probabilistically from topics. Variational inference is introduced as an approach for approximating the intractable posterior distribution over topics and learning model parameters.
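To make the generative process described above concrete, a minimal numpy sketch that samples one toy document from LDA's model: draw per-topic word distributions and per-document topic proportions from Dirichlet priors, then draw a topic and a word for each position. All sizes and hyperparameters are illustrative.

```python
# Toy simulation of LDA's generative process (illustrative sizes only).
import numpy as np

rng = np.random.default_rng(0)
V, K, doc_len = 10, 3, 8      # vocabulary size, number of topics, words per document
alpha, beta = 0.5, 0.1        # Dirichlet hyperparameters

# Per-topic word distributions (phi) and per-document topic proportions (theta).
phi = rng.dirichlet(beta * np.ones(V), size=K)   # shape (K, V)
theta = rng.dirichlet(alpha * np.ones(K))        # shape (K,)

# For each word position: draw a topic z from theta, then a word w from phi[z].
topics = rng.choice(K, size=doc_len, p=theta)
words = [rng.choice(V, p=phi[z]) for z in topics]
print(list(zip(topics.tolist(), words)))
```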
PEGASUS is a large Transformer-based model for abstractive text summarization. It uses a novel pre-training objective called gap-sentence generation (GSG) which masks sentences from input documents and trains the model to generate the missing sentences. GSG more closely resembles the downstream summarization task compared to other objectives. In experiments, PEGASUS achieved state-of-the-art results on 12 summarization datasets using GSG pre-training and outperformed other models when fine-tuned on limited data.
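A rough sketch of the gap-sentence masking idea behind GSG: remove some sentences from the input, replace them with a mask token, and use the removed sentences as the generation target. PEGASUS actually selects "principal" sentences by importance (e.g., ROUGE against the rest of the document); the sketch below just masks random sentences for simplicity.

```python
# Illustrative gap-sentence masking (GSG-style), with random sentence selection.
import random

MASK = "<mask_sent>"

def gap_sentence_mask(sentences, mask_ratio=0.3, seed=0):
    random.seed(seed)
    n_mask = max(1, int(len(sentences) * mask_ratio))
    masked_idx = set(random.sample(range(len(sentences)), n_mask))
    source = [MASK if i in masked_idx else s for i, s in enumerate(sentences)]
    target = [s for i, s in enumerate(sentences) if i in masked_idx]
    return " ".join(source), " ".join(target)

doc = [
    "The model is pre-trained on large web and news corpora.",
    "Selected sentences are removed from the input document.",
    "The decoder learns to generate the missing sentences.",
]
print(gap_sentence_mask(doc))
```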
Representation Learning of Vectors of Words and Phrases - Felipe Moraes
A talk about representation learning using word vectors such as Word2Vec and Paragraph Vector. It also introduces neural network language models (NNLMs) and presents some applications of NNLMs, such as sentiment analysis and information retrieval.
This document provides an overview of Bayes law, Bayesian networks, and latent Dirichlet allocation (LDA). It begins with an explanation of Bayes law and examples of how it can be used. Next, it defines Bayesian networks as probabilistic graphical models and provides examples. Finally, it introduces LDA as a statistical model for collections of discrete data like text corpora and explains how it can be used for topic modeling. The document includes mathematical notation and diagrams to illustrate key concepts.
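As a worked instance of Bayes' law as described above, a minimal sketch of the textbook diagnostic-test calculation; the prevalence and test accuracies below are made-up illustrative numbers.

```python
# Bayes' law: P(disease | positive) = P(positive | disease) * P(disease) / P(positive).
# All numbers below are invented for illustration.
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Total probability of a positive test, then the posterior via Bayes' rule.
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.161
```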
lazy has a more negative, critical connotation, while relaxed has a more positive connotation.
juicy has a more positive, tastier connotation while greasy has a more negative connotation.
victim has a more sympathetic connotation while loser has a more negative connotation.
The document summarizes key topics from ICASSP 2022, including general trends in speech and audio processing, self-supervised and contrastive learning approaches, security applications, and topics related to tasks like multilingualism and keyword spotting. Some of the main models and techniques discussed are Wav2vec, HuBERT, contrastive learning using Conformers, intermediate layer supervision in self-supervised learning, and anonymization of speech data for privacy.
PR-315: Taming Transformers for High-Resolution Image Synthesis - Hyeongmin Lee
These days there are many attempts to apply the Transformer architecture all over the place, regardless of whether the domain is language or vision. So in this week's presentation I would like to introduce a paper that applies it to high-resolution image synthesis and will be presented in a CVPR 2021 oral session!
** Due to a problem with the broadcasting equipment, this video was recorded without iPad handwritten annotations!! **
Paper link: https://arxiv.org/abs/2012.09841
Video link: https://youtu.be/GcbT0IGt0xE
Question Answering System using a machine learning approach - Garima Nanda
In compact form, this presentation shows how a machine learning approach based on classification techniques can be used for effective and efficient question-answering interaction.
Causal Inference: Primer (2019-06-01 잔디콘) - Minho Lee
- Slides presented at the 2019-06-01 Jandi Conference (잔디콘, @Google Campus)
- Covers how to infer causal relationships from data
- Briefly looks at Potential Outcomes and Causal Graphical Models
- There are typos in the slides
- On pages 22 and 28, it should be Berkson's Paradox, not Perkson's
Before the Transformer (2015), an attention mechanism based on alignment was applied to neural machine translation, showing improved performance on long input sequences.
Two variants of the attention mechanism are presented: global attention and local attention, as sketched below.
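A minimal numpy sketch of global attention with a dot-product alignment score (one of several possible score functions): score every encoder state against the current decoder state, softmax the scores into alignment weights, and take the weighted context vector. Shapes and values are illustrative.

```python
# Global (dot-product) attention over all encoder states, numpy sketch.
import numpy as np

rng = np.random.default_rng(0)
src_len, hidden = 6, 4
encoder_states = rng.normal(size=(src_len, hidden))  # one vector per source position
decoder_state = rng.normal(size=(hidden,))           # current target-side hidden state

scores = encoder_states @ decoder_state              # alignment scores, shape (src_len,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                             # softmax -> attention distribution
context = weights @ encoder_states                   # weighted sum of encoder states
print(weights.round(3), context.round(3))
```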
최보경: Applying Causal Inference for Practitioners - Best Practices
Presentation video: https://youtu.be/wTPEZDc6fw4
---
PAP's Popcorn Season 1 features the stories of data practitioners who grow together with their products.
---
PAP (Product Analytics Playground) is a community for talking comfortably about product data analytics.
Our goal is to enable more people to lead a data-driven product culture wherever they are.
Just as people from diverse roles come together to build a product, PAP is also made up of diverse members and is built through your participation.
---
Official page: https://playinpap.oopy.io
Facebook group: https://www.facebook.com/groups/talkinpap
Team blog: https://playinpap.github.io
To create a clear and precise questionnaire, it is necessary to make good use of the different question types. Three types of questions exist:
- closed questions
- open questions
- semi-open questions
These question types translate, in the online survey software Drag'n Survey, into:
- the yes/no question
- the multiple-choice question
- the rating bar
- the rating matrix
- image comparison/rating
- the question with a free-text field
- the question with multiple free-text fields
Drag'n Survey is the first free online software offering a drag-and-drop function for building questionnaires.
Drag'n Survey is available to everyone through three plans:
- Free
- Plus
- Premium
The document appears to be a quiz or game show about key events and innovations during the Industrial Revolution. It includes questions about inventions like the seed drill and blast furnace, industries like cotton and iron, and social changes such as policies around child labor. Each question has multiple choice answers and is associated with a cash prize amount, with the highest being $1 million for the final question.
13. Personification consists of attributing human traits, feelings, or behaviours to an inanimate thing...
14. Enumeration (accumulation) consists of lining up a large number of words or groups of words of the same grammatical nature and function, so as to emphasize the idea being expressed.
15. Enumeration (accumulation): He bought canned vegetables, mushrooms, tomato sauce, bread, milk, and strawberry jam...