In computer science, a queue (/ˈkjuː/ kyew) is a particular kind of abstract data type or collection in which the entities in the collection are kept in order and the principal (or only) operations on the collection are the addition of entities to the rear terminal position, known as enqueue, and removal of entities from the front terminal position, known as dequeue. This makes the queue a First-In-First-Out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before have to be removed before the new element can be removed. Often a peek or front operation is also included, returning the value of the front element without dequeuing it. A queue is an example of a linear data structure, or more abstractly a sequential collection.
A queue is an abstract data structure, somewhat similar to a stack. Unlike a stack, a queue is open at both ends: one end is always used to insert data (enqueue) and the other is used to remove data (dequeue). A queue follows the First-In-First-Out methodology, i.e., the data item stored first will be accessed first.
2. Queues
• Definition of a Queue
• Examples of Queues
• Design of a Queue Class
• Different Implementations of the Queue Class
3. Introduction to Queues
• A queue is a waiting line
• Queues appear in daily life:
– A line of persons waiting to check out at a supermarket
– A line of persons waiting to purchase a ticket for a film
– A line of planes waiting to take off at an airport
– A line of vehicles at a toll booth
4. Definition of a Queue
• A queue is a data structure that models/enforces the first-come first-serve order, or equivalently the first-in first-out (FIFO) order.
• That is, the element that is inserted first into the queue will be the element that will be deleted first, and the element that is inserted last is deleted last.
• A waiting line is a good real-life example of a queue. (In fact, the British word for "line" is "queue".)
5. Queues
• A queue is a collection of data elements. Unlike a stack, in which elements are pushed and popped at only one end of the list:
– items are removed from a queue at one end, called the FRONT of the queue;
– and elements are added at the other end, called the BACK
6. Introduction to Queues
• Difference between Stacks and Queues:
– A stack exhibits last-in-first-out (LIFO) behavior
– A queue exhibits first-in-first-out (FIFO) behavior
7. The Queue Operations
❐ A queue is like a line of people waiting for a bank teller. The queue has a front and a rear.
(Figure: a line of people, with the Front and Rear of the queue labeled.)
8. The Queue Operations
❐ New people must enter the queue at the rear. This is called an enqueue operation.
(Figure: a new person joining the line at the Rear.)
9. The Queue Operations
❐ When an item is taken from the queue, it always comes from the front. This is called a dequeue operation.
(Figure: a person leaving the line from the Front.)
10. Queue Abstract Data Type
• Basic operations
– Construct a queue
– Check if empty
– Enqueue (add element to back)
– Front (retrieve value of element from front)
– Dequeue (remove element from front)
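The basic operations listed above can be sketched in Python using collections.deque, which supports constant-time insertion at the back and removal at the front. This is a minimal illustrative sketch; the slides themselves do not assume any particular language.

```python
from collections import deque

q = deque()            # construct an empty queue
print(len(q) == 0)     # check if empty: True

q.append(4)            # enqueue: add element to the back
q.append(8)
q.append(6)

print(q[0])            # front: retrieve value of front element -> 4
print(q.popleft())     # dequeue: remove element from the front -> 4
print(q[0])            # 8 is now at the front
```

Note that deque restricts nothing by itself; the FIFO discipline comes from using only append (back) and popleft (front).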
11. A Graphic Model of a Queue
• Tail: All new items are added on this end
• Head: All items are deleted from this end
12. Operations on Queues
• Insert(item): (also called enqueue)
– It adds a new item to the tail of the queue
• Remove(): (also called delete or dequeue)
– It deletes the head item of the queue, and returns it to the caller. If the queue is already empty, this operation returns NULL
• getHead(): Returns the value in the head element of the queue
• getTail(): Returns the value in the tail element of the queue
• isEmpty(): Returns true if the queue has no items
• size(): Returns the number of items in the queue
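The operations above can be packaged as a small class. The method names (Insert, Remove, getHead, getTail, isEmpty, size) follow the slide; the deque-based body is an illustrative sketch, not the deck's own implementation.

```python
from collections import deque

class Queue:
    """FIFO queue exposing the operations named on the slide (sketch)."""
    def __init__(self):
        self._items = deque()

    def Insert(self, item):            # also called enqueue
        self._items.append(item)       # adds at the tail

    def Remove(self):                  # also called delete or dequeue
        if not self._items:            # empty queue: slide says return NULL
            return None
        return self._items.popleft()   # delete the head item and return it

    def getHead(self):
        return self._items[0] if self._items else None

    def getTail(self):
        return self._items[-1] if self._items else None

    def isEmpty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)

q = Queue()
for x in (10, 15, 7):
    q.Insert(x)
assert q.getHead() == 10 and q.getTail() == 7
assert q.Remove() == 10    # FIFO: the first item inserted is deleted first
assert q.size() == 2
```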
13. Queue as a Class
• Much like stacks and linked lists were designed and implemented as classes, a queue can be conveniently packaged as a class
• It seems natural to think of a queue as similar to a linked list, but with more basic operations, to enforce the FIFO order of insertion and deletion
14. Designing and Building a Queue Class
Array-Based
• Consider an array in which to store a queue
• Note additional variables needed: myFront, myBack
• Picture a queue object like this
19. Circular Queue
• Problems
– We quickly "walk off the end" of the array
• Possible solutions
– Shift array elements
– Use a circular queue
– Note that both an empty and a full queue give myBack == myFront
20. Array Implementation
• A queue can be implemented with an array, as shown here. For example, this queue contains the integers 4 (at the front), 8 and 6 (at the rear).
[0] [1] [2] [3] [4] [5] ...
 4   8   6   (we don't care what's in the rest of the array)
An array of integers to implement a queue of integers
21. Array Implementation
• The easiest implementation also keeps track of the number of items in the queue and the indices of the first element (at the front of the queue) and the last element (at the rear).
[0] [1] [2] [3] [4] [5] ...
 4   8   6
size = 3, first = 0, last = 2
22. A Dequeue Operation
• When an element leaves the queue, size is decremented, and first changes, too.
[0] [1] [2] [3] [4] [5] ...
 4   8   6   (the 4 at [0] is no longer part of the queue)
size = 2, first = 1, last = 2
23. An Enqueue Operation
• When an element enters the queue, size is incremented, and last changes, too.
[0] [1] [2] [3] [4] [5] ...
     8   6   2
size = 3, first = 1, last = 3
24. At the End of the Array
• There is special behavior at the end of the array. For example, suppose we want to add a new element to this queue, where the last index is [5]:
[0] [1] [2] [3] [4] [5]
             2   1   6
size = 3, first = 3, last = 5
25. At the End of the Array
• The new element goes at the front of the array (if that spot isn't already used):
[0] [1] [2] [3] [4] [5]
 4           2   1   6
size = 4, first = 3, last = 0
26. Array Implementation
• Easy to implement
• But it has a limited capacity with a fixed array
• Or you must use a dynamic array for an unbounded capacity
• Special behavior is needed when the rear reaches the end of the array.
[0] [1] [2] [3] [4] [5] ...
 4   8   6
size = 3, first = 0, last = 2
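The array scheme from slides 20-26 can be combined into one sketch: a fixed-capacity circular queue that stores first and size, computing last with modular arithmetic so the rear wraps past the end of the array. Keeping size explicitly also resolves the empty-versus-full ambiguity noted on slide 19. The field names first and size follow the slides; the class name ArrayQueue and the capacity default are my own.

```python
class ArrayQueue:
    """Fixed-capacity circular queue (sketch of the slides' array scheme)."""
    def __init__(self, capacity=6):
        self.data = [None] * capacity
        self.first = 0     # index of the front element
        self.size = 0      # item count; distinguishes empty from full

    def enqueue(self, item):
        if self.size == len(self.data):
            raise OverflowError("queue is full")
        last = (self.first + self.size) % len(self.data)  # wrap past the end
        self.data[last] = item
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        item = self.data[self.first]
        self.first = (self.first + 1) % len(self.data)    # front moves right
        self.size -= 1
        return item

q = ArrayQueue()
for x in (4, 8, 6):
    q.enqueue(x)
assert q.dequeue() == 4    # first changes from 0 to 1, size drops to 2
```

The modulus is what implements "the new element goes at the front of the array" from slide 25: when first + size runs past the last index, the computed last index wraps to 0.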
27. Linked List Implementation
• A queue can also be implemented with a linked list with both a head and a tail pointer.
head_ptr → 10 → 15 → 7 → 13 → null
(tail_ptr points to the last node)
28. Linked List Implementation
• Which end do you think is the front of the queue? Why?
head_ptr → 10 → 15 → 7 → 13 → null
29. Linked List Implementation
• The head_ptr points to the front of the list.
• Because it is harder to remove items from the tail of the list.
head_ptr → 10 → 15 → 7 → 13 → null (Front at head_ptr, Rear at tail_ptr)
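The linked-list scheme of slides 27-29 can be sketched the same way: enqueue links a new node after tail_ptr, and dequeue unlinks the node at head_ptr, which is exactly why the head end serves as the front. The names head_ptr and tail_ptr follow the figures; everything else here is a hypothetical minimal implementation.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    """Queue as a singly linked list with head and tail pointers (sketch)."""
    def __init__(self):
        self.head_ptr = None   # front of the queue: dequeue here, O(1)
        self.tail_ptr = None   # rear of the queue: enqueue here, O(1)

    def enqueue(self, value):
        node = Node(value)
        if self.tail_ptr is None:      # empty queue: node is head and tail
            self.head_ptr = node
        else:
            self.tail_ptr.next = node  # link after the current tail
        self.tail_ptr = node

    def dequeue(self):
        if self.head_ptr is None:      # empty queue
            return None
        node = self.head_ptr
        self.head_ptr = node.next      # front moves to the next node
        if self.head_ptr is None:      # queue became empty
            self.tail_ptr = None
        return node.value

q = LinkedQueue()
for x in (10, 15, 7):
    q.enqueue(x)
assert q.dequeue() == 10   # removal happens at head_ptr, the front
```

Removing at the head is O(1); removing at the tail of a singly linked list would require walking the whole list to find the second-to-last node, which is the point slide 29 makes.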
30. Abstract Data Type
• Abstract Data Type as a design tool
• Concerned only with the important concept or model
• No concern with implementation details.
• Stack & Queue are examples of ADTs
• An array is not an ADT.
31. What is the difference?
• Stack & Queue vs. Array
– Arrays are data storage structures, while stacks and queues are specialized data structures used as programmer's tools.
• Stack – a container that allows push and pop
• Queue – a container that allows enqueue and dequeue
• No concern with implementation details.
• In an array any item can be accessed, while in these data structures access is restricted.
• They are more abstract than arrays.