SEABIRDS SOLUTION
IEEE 2012 – 2013
SOFTWARE PROJECTS IN VARIOUS DOMAINS
| JAVA | J2ME | J2EE | DOTNET | MATLAB | NS2 |

SBGC
Chennai: 24/83, O Block, MMDA Colony, Arumbakkam, Chennai - 600106. Mobile: 09944361169
Trichy: 4th Floor, Surya Complex, Singarathope Bus Stop, Old Madurai Road, Trichy - 620002. Mobile: 09003012150, Phone: 0431-4012303
Web: www.ieeeproject.in
E-Mail: ieeeproject@hotmail.com
SBGC provides IEEE 2012-2013 projects for all final year students. We assist students with technical guidance in two categories.
Category 1: Students with new project ideas, or new or old IEEE papers.
Category 2: Students selecting from our project list.
When you register for a project, we ensure that the project is implemented to your fullest satisfaction and that you have a thorough understanding of every aspect of the project.
SBGC PROVIDES YOU THE LATEST IEEE 2012 PROJECTS / IEEE 2013 PROJECTS FOR STUDENTS OF THE FOLLOWING DEPARTMENTS:
B.E, B.TECH, M.TECH, M.E, DIPLOMA, MS, BSC, MSC, BCA, MCA, MBA, BBA, PHD
B.E (ECE, EEE, E&I, ICE, MECH, PROD, CSE, IT, THERMAL, AUTOMOBILE, MECHATRONICS, ROBOTICS)
B.TECH (ECE, MECHATRONICS, E&I, EEE, MECH, CSE, IT, ROBOTICS)
M.TECH (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
M.E (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
DIPLOMA (CE, EEE, E&I, ICE, MECH, PROD, CSE, IT)
MBA (HR, FINANCE, MANAGEMENT, HOTEL MANAGEMENT, SYSTEM MANAGEMENT, PROJECT MANAGEMENT, HOSPITAL MANAGEMENT, SCHOOL MANAGEMENT, MARKETING MANAGEMENT, SAFETY MANAGEMENT)
We also have a training, project, and R&D division to serve students and make them job-oriented professionals.
PROJECT SUPPORTS AND DELIVERABLES
Free Course (JAVA & DOT NET)
Project Abstract
IEEE Paper
IEEE Reference Papers, Materials & Books on CD
PPT / Review Material
Project Report (All Diagrams & Screenshots)
Working Procedures
Algorithm Explanations
Project Installation on Laptops
Project Certificate
TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON NETWORKING
1. Balancing the Trade-Offs between Query Delay and Data Availability in MANETs (2012)
In this project, we propose schemes to balance the trade-offs between data availability and query delay under different system settings and requirements. Mobile nodes in one partition are not able to access data hosted by nodes in other partitions, which significantly degrades the performance of data access. To deal with this problem, we apply data replication techniques.
2. MeasuRouting: A Framework for Routing Assisted Traffic Monitoring (2012)
In this paper we present a theoretical framework for MeasuRouting. Furthermore, as proofs of concept, we present synthetic and practical monitoring applications to showcase the utility enhancement achieved with MeasuRouting.
3. Cooperative Profit Sharing in Coalition-Based Resource Allocation in Wireless Networks (2012)
We model optimal cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings.
4. BloomCast: Efficient Full-Text Retrieval over Unstructured P2Ps with Guaranteed Recall (2012)
In this paper we propose BloomCast, an efficient and effective full-text retrieval scheme in unstructured P2P networks. BloomCast is effective because it guarantees a perfect recall rate with high probability.
5. On Optimizing Overlay Topologies for Search in Unstructured Peer-to-Peer Networks (2012)
We propose a novel overlay formation algorithm for unstructured P2P networks. Based on the file-sharing pattern exhibiting the power-law property, our proposal is unique in that it offers rigorous performance guarantees.
6. An MDP-Based Dynamic Optimization Methodology for Wireless Sensor Networks (2012)
In this paper, we propose an automated Markov Decision Process (MDP)-based methodology to prescribe optimal sensor node operation to meet application requirements and adapt to changing environmental stimuli. Numerical results confirm the optimality of our proposed methodology and reveal that it meets application requirements more closely than other feasible policies.
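The MDP machinery behind such a methodology can be sketched with plain value iteration. Everything concrete below (state and action counts, rewards, transition probabilities) is illustrative, not taken from the paper:

```java
// Minimal value-iteration sketch for a finite MDP, as might model hypothetical
// sensor node modes (sleep / listen / transmit). All numbers supplied by the
// caller are assumptions for illustration only.
class SensorMdp {
    // V[s] <- max_a ( R[s][a] + gamma * sum_s' P[s][a][s'] * V[s'] )
    static double[] valueIteration(double[][][] P, double[][] R, double gamma, int iters) {
        int nS = R.length, nA = R[0].length;
        double[] V = new double[nS];
        for (int it = 0; it < iters; it++) {
            double[] Vn = new double[nS];
            for (int s = 0; s < nS; s++) {
                double best = Double.NEGATIVE_INFINITY;
                for (int a = 0; a < nA; a++) {
                    double q = R[s][a];                                  // immediate reward
                    for (int s2 = 0; s2 < nS; s2++)
                        q += gamma * P[s][a][s2] * V[s2];                // discounted future value
                    best = Math.max(best, q);
                }
                Vn[s] = best;
            }
            V = Vn;
        }
        return V;
    }
}
```

The greedy action at each state of the converged V gives the prescribed operating policy.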
7. Obtaining Provably Legitimate Internet Topologies (2012)
In this paper, we address the Internet topology problem by providing a framework to generate small, realistic, and policy-aware topologies. We propose HBR, a novel sampling method which exploits the inherent hierarchy of the policy-aware Internet topology. We formally prove that our approach generates connected and legitimate topologies that are compatible with policy-based routing conventions and rules.
8. Extrema Propagation: Fast Distributed Estimation of Sums and Network Sizes (2012)
This paper introduces Extrema Propagation, a probabilistic technique for distributed estimation of the sum of positive real numbers. The technique relies on the exchange of duplicate-insensitive messages and can be applied in flood and/or epidemic settings where multipath routing occurs; it is tolerant of message loss; it is fast, as the number of message exchange steps can be made just slightly above the theoretical minimum; and it is fully distributed, with no single point of failure and the result produced at every node.
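The duplicate-insensitive core of this family of estimators can be sketched as follows. Each node draws K exponential variates with rate equal to its value; gossiped messages carry the pointwise minima (an idempotent merge, hence safe under duplication and multipath), and the sum is estimated as (K-1) divided by the total of the minima. The vector length and seeds below are illustrative choices, not the paper's parameters:

```java
import java.util.Random;

// Extrema-propagation-style sketch: min of Exp(a) and Exp(b) is Exp(a+b),
// so pointwise minima encode the sum of all participating values.
class ExtremaPropagation {
    static double[] sketch(double value, int k, Random rnd) {
        double[] x = new double[k];
        for (int i = 0; i < k; i++)
            x[i] = -Math.log(1 - rnd.nextDouble()) / value;  // draw from Exp(rate = value)
        return x;
    }
    static double[] merge(double[] a, double[] b) {          // idempotent, order-insensitive
        double[] m = new double[a.length];
        for (int i = 0; i < a.length; i++) m[i] = Math.min(a[i], b[i]);
        return m;
    }
    static double estimateSum(double[] minima) {             // estimator (K-1) / sum of minima
        double s = 0;
        for (double m : minima) s += m;
        return (minima.length - 1) / s;
    }
}
```

Because merge is a pointwise minimum, re-delivered or multipath copies of a message never bias the estimate.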
9. Latency Equalization as a New Network Service Primitive (2012)
We propose a Latency Equalization (LEQ) service, which equalizes the perceived latency for all clients.
10. Grouping-Enhanced Resilient Probabilistic En-Route Filtering of Injected False Data in WSNs (2012)
This paper proposes a scheme referred to as Grouping-enhanced Resilient Probabilistic En-route Filtering (GRPEF). In GRPEF, an efficient distributed algorithm is proposed to group nodes without incurring extra groups, and a multi-axis division-based approach for deriving location-aware keys is used to overcome the threshold problem and remove the dependence on sink immobility and routing protocols.
11. On Achieving Group-Strategyproof Multicast (2012)
A multicast scheme is strategyproof if no receiver has an incentive to lie about her true valuation. It is further group-strategyproof if no group of colluding receivers has an incentive to lie. We study multicast schemes that target group strategyproofness, in both directed and undirected networks.
12. Distributed α-Optimal User Association and Cell Load Balancing in Wireless Networks (2012)
In this paper, we develop a framework for user association in infrastructure-based wireless networks, specifically focused on flow-level cell load balancing under spatially inhomogeneous traffic distributions. Our work encompasses several different user association policies: rate-optimal, throughput-optimal, delay-optimal, and load-equalizing, which we collectively denote α-optimal user association.
13. Opportunistic Flow-Level Latency Estimation Using Consistent NetFlow (2012)
In this paper, we propose the Consistent NetFlow (CNF) architecture for measuring per-flow delay within routers. CNF utilizes the existing NetFlow architecture, which already reports the first and last timestamps per flow, and proposes hash-based sampling to ensure that two adjacent routers record the same flows.
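The hash-based sampling idea is simple to sketch: if every router applies the same deterministic hash and threshold to the flow key, they independently select the same subset of flows, so their per-flow timestamps can be matched up afterwards. The hash function and the 10% rate used in the usage note are illustrative assumptions, not CNF's actual choices:

```java
// Consistent (coordinated) flow sampling sketch: sampling decisions depend only
// on a deterministic hash of the flow key, never on local randomness, so two
// routers observing the same flow make the same keep/drop decision.
class ConsistentSampler {
    static boolean sample(String flowKey, double rate) {
        int h = flowKey.hashCode() & 0x7fffffff;   // same hash value at every router
        return (h % 1000) < rate * 1000;           // keep the flow iff it falls under the threshold
    }
}
```

For example, with `sample("10.0.0.1:5060->10.0.0.2:5060", 0.1)`, every router in the path reaches the identical verdict for that flow.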
14. Leveraging a Compound Graph-Based DHT for Multi-Attribute Range Queries with Performance Analysis (2012)
Resource discovery is critical to the usability and accessibility of grid computing systems. Distributed Hash Tables (DHTs) have been applied to grid systems as a distributed mechanism for providing scalable range-query and multi-attribute resource discovery. Multi-DHT-based approaches depend on multiple DHT networks, with each network responsible for a single attribute. Single-DHT-based approaches keep the resource information of all attributes in a single node. Both classes of approaches lead to high overhead. In this paper, we propose a Low-Overhead Range-query Multi-attribute (LORM) DHT-based resource discovery approach. Unlike other DHT-based approaches, LORM relies on a single compound graph-based DHT network and distributes resource information among nodes in balance by taking advantage of the compound graph structure. Moreover, it has a high capability to handle the large-scale and dynamic characteristics of resources in grids. Experimental results demonstrate the efficiency of LORM in comparison with other resource discovery approaches. LORM dramatically reduces maintenance and resource discovery overhead. In addition, it yields significant improvements in resource location efficiency. We also analyze the performance of the LORM approach rigorously by comparing it with other multi-DHT-based and single-DHT-based approaches with respect to their overhead and efficiency. The analytical results are consistent with the experimental results and prove the superiority of the LORM approach in theory.
15. Exploiting Excess Capacity to Improve Robustness of WDM Mesh Networks (2012)
Excess capacity (EC) is the unused capacity in a network. We propose EC management techniques to improve network performance. Our techniques exploit the EC in two ways. First, a connection preprovisioning algorithm is used to reduce the connection setup time. Second, whenever possible, we use protection schemes that have higher availability and shorter protection switching time. Specifically, depending on the amount of EC available in the network, our proposed EC management techniques dynamically migrate connections between high-availability, high-backup-capacity protection schemes and low-availability, low-backup-capacity protection schemes. Thus, multiple protection schemes can coexist in the network. The four EC management techniques studied in this paper differ in two respects: when the connections are migrated from one protection scheme to another, and which connections are migrated. Specifically, Lazy techniques migrate connections only when necessary, whereas Proactive techniques migrate connections to free up capacity in advance. Partial Backup Reprovisioning (PBR) techniques try to migrate a minimal set of connections, whereas Global Backup Reprovisioning (GBR) techniques migrate all connections. We develop integer linear program (ILP) formulations and heuristic algorithms for the EC management techniques. We then present numerical examples to illustrate how the EC management techniques improve network performance by exploiting the EC in wavelength-division-multiplexing (WDM) mesh networks.
16. Revisiting Dynamic Query Protocols in Unstructured Peer-to-Peer Networks (2012)
In unstructured peer-to-peer networks, the average response latency and traffic cost of a query are two main performance metrics. Controlled-flooding resource query algorithms are widely used in unstructured networks such as peer-to-peer networks. In this paper, we propose a novel algorithm named Selective Dynamic Query (SDQ). Based on mathematical programming, SDQ calculates the optimal combination of an integer TTL value and a set of neighbors to control the scope of the next query. Our results demonstrate that SDQ provides finer-grained control than other algorithms: its response latency is close to the well-known minimum achieved via Expanding Ring; meanwhile, its traffic cost is also close to the minimum. To the best of our knowledge, this is the first work capable of achieving the best trade-off between response latency and traffic cost.
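The Expanding Ring baseline that SDQ is compared against can be sketched directly: re-flood the query with a growing TTL until the target is reached, trading repeated traffic for low latency. The graph representation below is an illustrative assumption:

```java
import java.util.*;

// Expanding-ring-style query sketch over an adjacency-list graph. Each round
// re-floods from the source with a larger TTL, exactly the behavior whose
// latency/traffic trade-off dynamic query protocols try to improve.
class ExpandingRing {
    // Smallest TTL at which `target` is reachable from `src`, or -1 if never within maxTtl.
    static int search(Map<Integer, List<Integer>> adj, int src, int target, int maxTtl) {
        for (int ttl = 1; ttl <= maxTtl; ttl++)
            if (withinHops(adj, src, target, ttl)) return ttl;   // each round floods wider
        return -1;
    }
    static boolean withinHops(Map<Integer, List<Integer>> adj, int src, int target, int ttl) {
        Set<Integer> frontier = new HashSet<>(List.of(src)), seen = new HashSet<>(frontier);
        for (int hop = 0; hop < ttl; hop++) {
            Set<Integer> next = new HashSet<>();
            for (int u : frontier)
                for (int v : adj.getOrDefault(u, List.of()))
                    if (seen.add(v)) next.add(v);                // forward only to unseen peers
            if (next.contains(target)) return true;
            frontier = next;
        }
        return false;
    }
}
```

On a line graph 1-2-3-4, a query from node 1 for node 4 succeeds only at TTL 3, after two cheaper failed rings.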
17. Adaptive Opportunistic Routing for Wireless Ad Hoc Networks (2012)
A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc networks is proposed. The proposed scheme utilizes a reinforcement learning framework to opportunistically route the packets even in the absence of reliable knowledge about channel statistics and network model. This scheme is shown to be optimal with respect to an expected average per-packet reward criterion. The proposed routing scheme jointly addresses the issues of learning and routing in an opportunistic context, where the network structure is characterized by the transmission success probabilities. In particular, this learning framework leads to a stochastic routing scheme that optimally “explores” and “exploits” the opportunities in the network.
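The explore/exploit tension in learning a relay without known channel statistics can be illustrated with a simple bandit-style rule: keep a running per-relay reward average and mostly pick the best-looking relay. This epsilon-greedy rule and its parameters are a generic illustration, not the paper's actual learning algorithm:

```java
import java.util.Random;

// Bandit-style relay selection sketch: per-relay reward averages are learned
// online, with probability eps of exploring a random relay.
class RelayLearner {
    final double[] avg;   // running mean reward per candidate relay
    final int[] n;        // samples observed per relay
    final double eps;
    final Random rnd;

    RelayLearner(int relays, double eps, Random rnd) {
        avg = new double[relays]; n = new int[relays];
        this.eps = eps; this.rnd = rnd;
    }
    int choose() {
        if (rnd.nextDouble() < eps) return rnd.nextInt(avg.length);  // explore
        int best = 0;
        for (int i = 1; i < avg.length; i++) if (avg[i] > avg[best]) best = i;
        return best;                                                 // exploit
    }
    void update(int relay, double reward) {                          // incremental mean update
        n[relay]++;
        avg[relay] += (reward - avg[relay]) / n[relay];
    }
}
```

After each (attempted) transmission, the observed per-packet reward is fed back via `update`, steering future relay choices.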
18. Design, Implementation, and Performance of a Load Balancer for SIP Server Clusters (2012)
This load balancer improves both throughput and response time versus a single node, while exposing a single interface to external clients. The algorithm, Transaction Least-Work-Left (TLWL), achieves its performance by integrating several features: knowledge of the SIP protocol; dynamic estimates of back-end server load; distinguishing transactions from calls; recognizing variability in call length; and exploiting differences in processing costs for different SIP transactions.
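The least-work-left core of such a balancer can be sketched in a few lines: track weighted outstanding work per back-end server and route each new transaction to the least-loaded one. The cost weights a caller would pass in (e.g., INVITE transactions costing more than BYE) are illustrative assumptions:

```java
// Least-work-left selection sketch in the spirit of TLWL: outstanding work per
// server is weighted by an estimated transaction cost supplied by the caller.
class LeastWorkLeft {
    final double[] work;                       // weighted outstanding work per back-end server

    LeastWorkLeft(int servers) { work = new double[servers]; }

    int assign(double cost) {                  // route to the least-loaded server, account for cost
        int best = 0;
        for (int i = 1; i < work.length; i++)
            if (work[i] < work[best]) best = i;
        work[best] += cost;
        return best;
    }
    void complete(int server, double cost) {   // transaction finished: release its weight
        work[server] -= cost;
    }
}
```

Weighting by estimated cost, rather than counting transactions, is what lets cheap and expensive SIP transactions coexist without skewing the balance.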
19. Router Support for Fine-Grained Latency Measurements (2012)
An increasing number of datacenter network applications, including automated trading and high-performance computing, have stringent end-to-end latency requirements where even microsecond variations may be intolerable. The resulting fine-grained measurement demands cannot be met effectively by existing technologies such as SNMP, NetFlow, or active probing. Instrumenting routers with a hash-based primitive called the Lossy Difference Aggregator (LDA) has been proposed to measure latencies down to tens of microseconds even in the presence of packet loss. Because LDA does not modify or encapsulate the packet, it can be deployed incrementally without changes along the forwarding path. When compared to Poisson-spaced active probing with similar overheads, the LDA mechanism delivers orders-of-magnitude smaller relative error. Although ubiquitous deployment is ultimately desired, it may be hard to achieve in the shorter term.
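The core LDA idea can be sketched compactly: both ends hash each packet into a bucket and keep only a (timestamp sum, count) pair per bucket. Buckets whose counts disagree experienced loss and are discarded; the surviving buckets yield the mean latency with constant per-bucket state. Bucket count and the identity hash below are illustrative simplifications:

```java
// Lossy-Difference-Aggregator-style sketch: per-bucket timestamp sums and counts
// at sender and receiver; only loss-free buckets (matching counts) contribute.
class Lda {
    final long[] sum;   // sum of timestamps seen in each bucket
    final int[] cnt;    // packets seen in each bucket

    Lda(int buckets) { sum = new long[buckets]; cnt = new int[buckets]; }

    void add(int packetId, long timestamp) {
        int b = (packetId & 0x7fffffff) % sum.length;   // same bucketing at both ends
        sum[b] += timestamp;
        cnt[b]++;
    }
    static double meanLatency(Lda sender, Lda receiver) {
        long d = 0; int n = 0;
        for (int b = 0; b < sender.sum.length; b++)
            if (sender.cnt[b] == receiver.cnt[b] && sender.cnt[b] > 0) {  // loss-free buckets only
                d += receiver.sum[b] - sender.sum[b];
                n += sender.cnt[b];
            }
        return (double) d / n;
    }
}
```

Because a lost packet desynchronizes only its own bucket's count, loss discards a small slice of samples instead of corrupting the whole estimate.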
20. A Framework for Routing Assisted Traffic Monitoring (2012)
Monitoring transit traffic at one or more points in a network is of interest to network operators for reasons of traffic accounting, debugging or troubleshooting, forensics, and traffic engineering. Previous research in the area has focused on deriving a placement of monitors across the network toward the end of maximizing the monitoring utility of the network operator for a given traffic routing. However, both traffic characteristics and measurement objectives can change dynamically over time, rendering a previously optimal placement of monitors suboptimal. It is not feasible to dynamically redeploy or reconfigure measurement infrastructure to cater to such evolving measurement requirements. MeasuRouting addresses this problem by strategically routing traffic sub-populations over fixed monitors. The main challenge for MeasuRouting is to work within the constraints of existing intra-domain traffic engineering operations that are geared toward efficiently utilizing bandwidth resources, meeting Quality of Service (QoS) constraints, or both. A fundamental feature of intra-domain routing that makes MeasuRouting feasible is that intra-domain routing is often specified for aggregate flows. MeasuRouting can therefore differentially route components of an aggregate flow while ensuring that the aggregate placement is compliant with the original traffic engineering objectives.
21. Independent Directed Acyclic Graphs for Resilient Multipath Routing (2012)
In order to achieve resilient multipath routing, we introduce the concept of Independent Directed Acyclic Graphs (IDAGs) in this study. Link-independent (node-independent) DAGs satisfy the property that any path from a source to the root on one DAG is link-disjoint (node-disjoint) with any path from the source to the root on the other DAG. Given a network, we develop polynomial time algorithms to compute link-independent and node-independent DAGs. The algorithm developed in this paper: (1) provides multipath routing; (2) utilizes all possible edges; (3) guarantees recovery from single link failure; and (4) achieves all these with at most one bit per packet as overhead when routing is based on destination address and incoming edge. We show the effectiveness of the proposed IDAGs approach by comparing key performance indices to those of the independent trees and multiple pairs of independent trees techniques through extensive simulations.
22. A Greedy Link Scheduler for Wireless Networks With Gaussian Multiple-Access and Broadcast Channels (2012)
Information-theoretic broadcast channels (BCs) and multiple-access channels (MACs) enable a single node to transmit data simultaneously to multiple nodes, and multiple nodes to transmit data simultaneously to a single node, respectively. In this paper, we address the problem of link scheduling in multihop wireless networks containing nodes with BC and MAC capabilities. We first propose an interference model that extends protocol interference models, originally designed for point-to-point channels, to include the possibility of BCs and MACs. Due to the high complexity of optimal link schedulers, we introduce the Multiuser Greedy Maximum Weight algorithm for link scheduling in multihop wireless networks containing BCs and MACs. Given a network graph, we develop new local pooling conditions and show that the performance of our algorithm can be fully characterized using the associated parameter, the multiuser local pooling factor. We provide examples of some network graphs, on which we apply local pooling conditions and derive the multiuser local pooling factor. We prove optimality of our algorithm in tree networks and show that the exploitation of BCs and MACs improves throughput performance considerably in multihop wireless networks.
23. A Quantization-Theoretic Perspective on Simulcast and Layered Multicast Optimization (2012)
We consider rate optimization in multicast systems that use several multicast trees on a communication network. The network is shared between different applications. For that reason, we model the available bandwidth for multicast as stochastic. For specific network topologies, we show that the multicast rate optimization problem is equivalent to the optimization of scalar quantization. We use results from rate-distortion theory to provide a bound on the achievable performance for the multicast rate optimization problem. A large number of receivers makes the possibility of adaptation to changing network conditions desirable in a practical system. To this end, we derive an analytical solution to the problem that is asymptotically optimal in the number of multicast trees. We derive local optimality conditions, which we use to describe a general class of iterative algorithms that give locally optimal solutions to the problem. Simulation results are provided for the multicast of an i.i.d. Gaussian process, an i.i.d. Laplacian process, and a video source.
24. Bit Weaving: A Non-Prefix Approach to Compressing Packet Classifiers in TCAMs (2012)
Ternary Content Addressable Memories (TCAMs) have become the de facto standard in industry for fast packet classification. Unfortunately, TCAMs have limitations of small capacity, high power consumption, high heat generation, and high cost. The well-known range expansion problem exacerbates these limitations, as each classifier rule typically has to be converted to multiple TCAM rules. One method for coping with these limitations is to use compression schemes to reduce the number of TCAM rules required to represent a classifier. Unfortunately, all existing compression schemes only produce prefix classifiers. Thus, they all miss the compression opportunities created by non-prefix ternary classifiers.
25. Cooperative Profit Sharing in Coalition-Based Resource Allocation in Wireless Networks (2012)
We consider a network in which several service providers offer wireless access service to their respective subscribed customers through potentially multi-hop routes. If providers cooperate, i.e., pool their resources, such as spectrum and base stations, and agree to serve each other's customers, their aggregate payoffs, and individual shares, can potentially substantially increase through efficient utilization of resources and statistical multiplexing. The potential of such cooperation can however be realized only if each provider intelligently determines who it would cooperate with, when it would cooperate, and how it would share its resources during such cooperation. Also, when the providers share their aggregate revenues, developing a rational basis for such sharing is imperative for the stability of the coalitions. We model such cooperation using transferable payoff coalitional game theory. We first consider the scenario in which the locations of the base stations and the channels that each provider can use have already been decided a priori. We show that the optimum cooperation strategy, which involves the allocations of the channels and the base stations to mobile customers, can be obtained as solutions of convex optimizations. We next show that the grand coalition is stable in this case, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff, while offering each such a share that removes any incentive to split from the coalition. Next, we show that when the providers can choose the locations of their base stations and decide which channels to acquire, the above results hold in important special cases. Finally, we examine cooperation when providers do not share their payoffs, but still share their resources so as to enhance individual payoffs. We show that the grand coalition continues to be stable.
26. CSMA/CN: Carrier Sense Multiple Access With Collision Notification (2012)
A wireless transmitter learns of a packet loss and infers collision only after completing the entire transmission. If the transmitter could detect the collision early [such as with carrier sense multiple access with collision detection (CSMA/CD) in wired networks], it could immediately abort its transmission, freeing the channel for useful communication. There are two main hurdles to realizing CSMA/CD in wireless networks. First, a wireless transmitter cannot simultaneously transmit and listen for a collision. Second, any channel activity around the transmitter may not be an indicator of collision at the receiver. This paper attempts to approximate CSMA/CD in wireless networks with a novel scheme called CSMA/CN (collision notification). Under CSMA/CN, the receiver uses PHY-layer information to detect a collision and immediately notifies the transmitter. The collision notification consists of a unique signature, sent on the same channel as the data. The transmitter employs a listener antenna and performs signature correlation to discern this notification. Once discerned, the transmitter immediately aborts the transmission. We show that the notification signature can be reliably detected at the listener antenna, even in the presence of strong self-interference from the transmit antenna. A prototype testbed of 10 USRP/GNU Radios demonstrates the feasibility and effectiveness of CSMA/CN.
27. Dynamic Power Allocation Under Arbitrary Varying Channels—An Online Approach (2012)
A major problem in wireless networks is coping with limited resources, such as bandwidth and energy. These issues become a major algorithmic challenge in view of the dynamic nature of the wireless domain. We consider in this paper the single-transmitter power assignment problem under time-varying channels, with the objective of maximizing the data throughput. It is assumed that the transmitter has a limited power budget, to be sequentially divided during the lifetime of the battery. We deviate from the classic work in this area, which leads to explicit "water-filling" solutions, by considering a realistic scenario where the channel state quality changes arbitrarily from one transmission to the other. The problem is accordingly tackled within the framework of competitive analysis, which allows for worst-case performance guarantees in setups with arbitrarily varying channel conditions. We address both a "discrete" case, where the transmitter can transmit only at a fixed power level, and a "continuous" case, where the transmitter can choose any power level out of a bounded interval. For both cases, we propose online power-allocation algorithms with proven worst-case performance bounds. In addition, we establish lower bounds on the worst-case performance of any online algorithm, and show that our proposed algorithms are optimal.
28. Economic Issues in Shared Infrastructures (2012)
In designing and managing a shared infrastructure, one must take account of the fact that its participants will make self-interested and strategic decisions about the resources that they are willing to contribute to it and/or the share of its cost that they are willing to bear. Taking proper account of the incentive issues that thereby arise, we design mechanisms that, by eliciting appropriate information from the participants, can obtain for them maximal social welfare, subject to charging payments that are sufficient to cover costs. We show that there are incentivizing roles to be played both by the payments that we ask from the participants and the specification of how resources are to be shared. New in this paper is our formulation of models for designing optimal management policies, our analysis that demonstrates the inadequacy of simple sharing policies, and our proposals for some better ones. We learn that simple policies may be far from optimal and that efficient policy design is not trivial. However, we find that optimal policies have simple forms in the limit as the number of participants becomes large.
29. On New Approaches of Assessing Network Vulnerability: Hardness and Approximation (2012)
Society relies heavily on its networked physical infrastructure and information systems. Accurately assessing the vulnerability of these systems against disruptive events is vital for planning and risk management. Existing approaches to vulnerability assessments of large-scale systems mainly focus on investigating inhomogeneous properties of the underlying graph elements. These measures and the associated heuristic solutions are limited in evaluating the vulnerability of large-scale network topologies. Furthermore, these approaches often fail to provide performance guarantees of the proposed solutions. In this paper, we propose a vulnerability measure, pairwise connectivity, and use it to formulate network vulnerability assessment as a graph-theoretical optimization problem, referred to as the β-disruptor problem. The objective is to identify the minimum set of critical network elements, namely nodes and edges, whose removal results in a specific degradation of the network's global pairwise connectivity. We prove the NP-completeness and inapproximability of this problem and propose a pseudo-approximation algorithm for computing the set of critical nodes and a pseudo-approximation algorithm for computing the set of critical edges. The results of an extensive simulation-based experiment show the feasibility of our proposed vulnerability assessment framework and the efficiency of the proposed approximation algorithms in comparison to other approaches.
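The pairwise connectivity measure itself is easy to sketch: after deleting a candidate set of nodes, count the vertex pairs that remain connected, i.e. the sum over connected components of |C|·(|C|-1)/2. The disruptor problem then asks for the smallest deletion set driving this count below a target. The adjacency-list representation below is an illustrative assumption:

```java
import java.util.*;

// Pairwise connectivity of a graph after removing a set of nodes:
// BFS each surviving component and add size*(size-1)/2 connected pairs.
class PairwiseConnectivity {
    static long measure(Map<Integer, List<Integer>> adj, Set<Integer> removed) {
        Set<Integer> seen = new HashSet<>(removed);   // removed nodes are never visited
        long pairs = 0;
        for (int u : adj.keySet()) {
            if (seen.contains(u)) continue;
            long size = 0;                            // BFS one component
            Deque<Integer> q = new ArrayDeque<>(List.of(u));
            seen.add(u);
            while (!q.isEmpty()) {
                int x = q.poll();
                size++;
                for (int v : adj.getOrDefault(x, List.of()))
                    if (seen.add(v)) q.add(v);
            }
            pairs += size * (size - 1) / 2;           // connected pairs inside this component
        }
        return pairs;
    }
}
```

On a 5-node path, removing the middle node drops the measure from 10 connected pairs to 2, which is the kind of degradation the critical-element search tries to maximize per removed node.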
30. Quantifying Video-QoE Degradations of Internet Links (2012)
With the proliferation of multimedia content on the Internet, there is an increasing demand for video streams with high perceptual quality. The capability of present-day Internet links in delivering high-perceptual-quality streaming services, however, is not completely understood. Link-level degradations caused by intradomain routing policies and inter-ISP peering policies are hard to obtain, as Internet service providers often consider such information proprietary. Understanding link-level degradations will enable us to design future protocols, policies, and architectures to meet the rising multimedia demands. This paper presents a trace-driven study to understand quality-of-experience (QoE) capabilities of present-day Internet links using 51 diverse ISPs with a major presence in the US, Europe, and Asia-Pacific. We study their links from 38 vantage points in the Internet using both passive tracing and active probing for six days. We provide the first measurements of link-level degradations and case studies of intra-ISP and inter-ISP peering links from a multimedia standpoint. Our study offers surprising insights into intradomain traffic engineering, peering link loading, BGP, and the inefficiencies of using autonomous system (AS)-path lengths as a routing metric. Though our results indicate that Internet routing policies are not optimized for delivering high-perceptual-quality streaming services, we argue that alternative strategies such as overlay networks can help meet QoE demands over the Internet. Streaming services apart, our Internet measurement results can be used as an input to a variety of research problems.
31. Order Matters: Transmission Reordering in Wireless Networks (2012)
Modern wireless interfaces support a physical-layer capability called Message in Message (MIM). Briefly, MIM allows a receiver to disengage from an ongoing reception and engage onto a stronger incoming signal. Links that otherwise conflict with each other can be made concurrent with MIM. However, the concurrency is not immediate and can be achieved only if conflicting links begin transmission in a specific order. The importance of link order is new in wireless research, motivating MIM-aware revisions to link-scheduling protocols. This paper identifies the opportunity in MIM-aware reordering, characterizes the optimal improvement in throughput, and designs a link-layer protocol for enterprise wireless LANs to achieve it. Testbed and simulation results confirm the performance gains of the proposed system.
32. Static Routing and Wavelength Assignment for Multicast Advance Reservation in All-Optical Wavelength-Routed WDM Networks (2012)
In this paper, we investigate the static multicast advance reservation (MCAR) problem for all-optical wavelength-routed WDM networks. Under the advance reservation traffic model, connection requests specify their start time to be some time in the future and also specify their holding times. We investigate the static MCAR problem where the set of advance reservation requests is known ahead of time. We prove the MCAR problem is NP-complete, formulate the problem mathematically as an integer linear program (ILP), and develop three efficient heuristics, seqRWA, ISH, and SA, to solve the problem for practical-size networks. We also introduce a theoretical lower bound on the number of wavelengths required. To evaluate our heuristics, we first compare their performance to the ILP for small networks, and then simulate them over real-world, large-scale networks. We find the SA heuristic provides close-to-optimal results compared to the ILP for our smaller networks, and up to a 33% improvement over seqRWA and up to a 22% improvement over ISH on realistic networks. SA provides, on average, solutions 1.5-1.8 times the cost given by our conservative lower bound on large networks.
33. System-Level We consider a robust-optimization-driven system-level 2012
Optimization in approach to interference management in a cellular
Wireless Networks broadband system operating in an interference-limited
Managing Interference and highly dynamic regime. Here, base stations in
and Uncertainty via neighboring cells (partially) coordinate their transmission
Robust Optimization schedules in an attempt to avoid simultaneous max-
power transmission to their mutual cell edge. Limits on
communication overhead and use of the backhaul require
base station coordination to occur at a slower timescale
than the customer arrival process. The central challenge
is to properly structure coordination decisions at the slow
timescale, as these subsequently restrict the actions of
each base station until the next coordination period.
Moreover, because coordination occurs at the slower
timescale, the statistics of the arriving customers, e.g.,
the load, are typically only approximately known; thus,
this coordination must be done with only approximate
knowledge of statistics. We show that performance of
existing approaches that assume exact knowledge of
these statistics can degrade rapidly as the uncertainty in
the arrival process increases. We show that a two-stage
robust optimization framework is a natural way to model
two-timescale decision problems. We provide tractable
formulations for the base-station coordination problem
and show that our formulation is robust to fluctuations
(uncertainties) in the arriving load. This tolerance to load
fluctuation also serves to reduce the need for frequent
reoptimization across base stations, thus helping
minimize the communication overhead required for
system-level interference reduction. Our robust
optimization formulations are flexible, allowing us to
control the conservatism of the solution. Our simulations
show that we can build in robustness without significant
degradation of nominal performance.
34. The Case for Feed- Variable latencies due to communication delays or 2012
Forward Clock system noise is the central challenge faced by time-
Synchronization keeping algorithms when synchronizing over the
network. Using extensive experiments, we explore the
robustness of synchronization in the face of both normal
and extreme latency variability and compare the
feedback approaches of ntpd and ptpd (a software
implementation of IEEE-1588) to the feed-forward
approach of the RADclock and advocate for the benefits
of a feed-forward approach. Noting the current lack of
kernel support, we present extensions to existing
mechanisms in the Linux and FreeBSD kernels giving
full access to all available raw counters, and then
evaluate the TSC, HPET, and ACPI counters' suitability
as hardware timing sources. We demonstrate how the
RADclock achieves the same microsecond accuracy with
each counter.
TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON NETWORK SECURITY
S.NO TITLES ABSTRACT YEAR
1. Design and We have designed and implemented TARF, a robust 2012
Implementation of trust-aware routing framework for dynamic wireless
TARF: A Trust-Aware sensor networks (WSNs). Without tight time
Routing Framework synchronization or known geographic information,
for WSNs TARF provides trustworthy and energy-efficient routes.
Most importantly, TARF proves effective against those
harmful attacks developed out of identity deception; the
resilience of TARF is verified through extensive
evaluation with both simulation and empirical
experiments on large-scale WSNs under various
scenarios including mobile and RF-shielding network
conditions.
2. Risk-Aware In this paper, we propose a risk-aware response 2012
Mitigation for mechanism to systematically Cope with the identified
MANET Routing routing attacks. Our risk-aware approach is based on an
Attacks extended Dempster-Shafer mathematical theory of
evidence, introducing a notion of importance factors.
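For background, the basic Dempster-Shafer combination rule that this framework extends can be sketched as follows (a minimal illustration; the hypothesis names and mass values here are invented, and the paper's importance factors are not modeled):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments (dicts mapping frozenset hypotheses to mass).
    Mass on conflicting (disjoint) pairs is normalized away."""
    combined = {}
    conflict = 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two pieces of evidence about a suspected routing attack (toy values):
m1 = {frozenset({"attack"}): 0.6, frozenset({"attack", "normal"}): 0.4}
m2 = {frozenset({"attack"}): 0.5, frozenset({"attack", "normal"}): 0.5}
fused = dempster_combine(m1, m2)
```

Fusing the two sources concentrates mass on the hypothesis they agree on, which is the behavior the risk-aware response mechanism builds on.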
3. Survivability In this paper, we study survivability issues for RFID. We 2012
Experiment and first present an RFID survivability experiment to define a
Attack foundation to measure the degree of survivability of an
Characterization for RFID system under varying attacks. Then we model a
RFID series of malicious scenarios using stochastic process
algebras and study the different effects of those attacks
on the ability of the RFID system to provide critical
services even when parts of the system have been
damaged.
4. Detecting and In this paper, we present an 2012
Resolving Firewall innovative policy anomaly management framework
Policy Anomalies for firewalls, adopting a rule-based segmentation
technique to identify policy anomalies and derive
effective anomaly resolutions. In particular, we articulate
a grid-based representation technique, providing an
intuitive cognitive sense about policy anomalies.
5. Automatic In this paper, we present a complete solution for 2012
Reconfiguration for dynamically changing system membership in a large-
Large-Scale Reliable scale Byzantine-fault-tolerant system. We present a
Storage Systems service that tracks system membership and periodically
notifies other system nodes of changes.
6. Detecting Anomalous In this paper, we introduce the community anomaly 2012
Insiders in detection system (CADS), an unsupervised learning
Collaborative framework to detect insider threats based on the access
Information Systems logs of collaborative environments. The framework is
based on the observation that typical CIS users tend to
form community structures based on the subjects
accessed
7. An Extended Visual Conventional visual secret sharing schemes generate 2012
Cryptography noise-like random pixels on shares to hide secret images.
Algorithm for General This suffers from a management problem. In this paper, we
Access Structures propose a general approach to solve the above-
mentioned problems; the approach can be used for binary
secret images in non computer-aided decryption
environments.
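For orientation, the classic 2-out-of-2 scheme that such work generalizes can be sketched per pixel (an illustrative toy, not the paper's general-access-structure construction):

```python
import random

def share_pixel(bit):
    """Split one binary pixel (0 = white, 1 = black) into two shares of
    two subpixels each. A single share always has exactly one dark
    subpixel, so alone it reveals nothing; stacking (OR-ing) the shares
    yields one dark subpixel for white and two for black."""
    pattern = random.choice([(0, 1), (1, 0)])
    if bit == 0:
        return pattern, pattern                     # identical shares -> white
    return pattern, tuple(1 - v for v in pattern)   # complementary -> black

def stack(s1, s2):
    """Simulate overlaying two transparencies (dark wins)."""
    return tuple(a | b for a, b in zip(s1, s2))
```

Decryption is purely optical (stacking transparencies), which is why such schemes suit non-computer-aided environments.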
8. Mitigating Distributed In this paper, we extend port-hopping to support 2012
Denial of Service multiparty applications, by proposing the BIGWHEEL
Attacks in Multiparty algorithm, for each application server to communicate
Applications in the with multiple clients in a port-hopping manner without
Presence of Clock the need for group synchronization. Furthermore, we
Drift present an adaptive algorithm, HOPERAA, for enabling
hopping in the presence of bounded asynchrony, namely,
when the communicating parties have clocks with clock
drifts.
9. On the Security and Content distribution via network coding has received a 2012
Efficiency of Content lot of attention lately. However, direct application of
Distribution via network coding may be insecure. In particular, attackers
Network Coding can inject "bogus" data to corrupt the content distribution
process so as to hinder the information dispersal or even
deplete the network resource. Therefore, content
verification is an important and practical issue when
network coding is employed.
10. Packet-Hiding In this paper, we address the problem of selective 2012
Methods for jamming attacks in wireless networks. In these attacks,
Preventing Selective the adversary is active only for a short period of time,
Jamming Attacks selectively targeting messages of high importance.
11. Stochastic Model of 2012
Multi virus Dynamics
12. Peering Equilibrium Our scheme relies on a game theory modeling, with a 2012
Multipath Routing: A non-cooperative potential game considering both routing
Game Theory and congestions costs. We compare different PEMP
Framework for policies to BGP Multipath schemes by emulating a
Internet Peering realistic peering scenario.
Settlements
13. Modeling and Our scheme uses the Power Spectral Density (PSD) 2012
Detection of distribution of the scan traffic volume and its
Camouflaging Worm corresponding Spectral Flatness Measure (SFM) to
distinguish the C-Worm traffic from background traffic.
The performance data clearly demonstrates that our
scheme can effectively detect the C-Worm
propagation.
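The spectral-flatness idea can be sketched as follows (a minimal illustration with a naive DFT; window sizes, thresholds, and the actual C-Worm detection rule are not taken from the paper):

```python
import cmath, math

def psd(signal):
    """Power spectral density via a naive DFT (adequate for short series)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def spectral_flatness(signal):
    """SFM = geometric mean / arithmetic mean of the PSD (DC bin dropped).
    Near 1: noise-like, flat spectrum; near 0: power concentrated in a
    few frequencies, as with periodically probing scan traffic."""
    p = [max(v, 1e-12) for v in psd(signal)[1:]]
    geo = math.exp(sum(math.log(v) for v in p) / len(p))
    return geo / (sum(p) / len(p))
```

A strongly periodic scan-volume series yields a much lower SFM than noise-like background traffic, which is the separation the detector exploits.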
14. Analysis of a Botnet We present the design of an advanced hybrid peer-to- 2012
Takeover peer botnet. Compared with current botnets, the proposed
botnet is harder to shut down, monitor, and hijack. It
provides individualized encryption and control traffic
dispersion.
15. Efficient Network As real-time traffic such as video or voice increases on 2012
Modification to the Internet, ISPs are required to provide stable quality as
Improve QoS Stability well as connectivity at failures. For ISPs, how to
at Failures effectively improve the stability of these qualities at
failures with the minimum investment cost is an
important issue, and they need to effectively select a
limited number of locations to add link facilities.
16. Detecting Spam Compromised machines are one of the key security 2012
Zombies by threats on the Internet; they are often used to launch
Monitoring Outgoing various security attacks such as spamming and spreading
Messages malware, DDoS, and identity theft. Given that spamming
provides a key economic incentive for attackers to recruit
the large number of compromised machines, we focus on
the detection of the compromised machines in a network
that are involved in the spamming activities, commonly
known as spam zombies. We develop an effective spam
zombie detection system named SPOT by monitoring
outgoing messages of a network. SPOT is designed based
on a powerful statistical tool called Sequential
Probability Ratio Test, which has bounded false positive
and false negative error rates. In addition, we also
evaluate the performance of the developed SPOT system
using a two-month e-mail trace collected in a large US
campus network. Our evaluation studies show that SPOT
is an effective and efficient system in automatically
detecting compromised machines in a network. For
example, among the 440 internal IP addresses observed
in the e-mail trace, SPOT identifies 132 of them as being
associated with compromised machines. Out of the 132
IP addresses identified by SPOT, 126 can be either
independently confirmed (110) or highly likely (16) to be
compromised. Moreover, only seven internal IP
addresses associated with compromised machines in the
trace are missed by SPOT. In addition, we also compare
the performance of SPOT with two other spam zombie
detection algorithms based on the number and percentage
of spam messages originated or forwarded by internal
machines, respectively, and show that SPOT outperforms
these two detection algorithms.
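The core of Wald's Sequential Probability Ratio Test that SPOT builds on can be sketched as follows (an illustrative sketch; the per-message spam probabilities and error bounds are assumed values, not SPOT's parameters):

```python
import math

def sprt(observations, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    """Wald's SPRT. observations: 1 for a spam message, 0 for ham.
    H0: machine is normal (spam prob p0); H1: compromised (spam prob p1).
    alpha and beta bound the false positive / false negative rates."""
    accept_h1 = math.log((1 - beta) / alpha)
    accept_h0 = math.log(beta / (1 - alpha))
    llr = 0.0
    for x in observations:
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= accept_h1:
            return "spam zombie"
        if llr <= accept_h0:
            return "normal"
    return "undecided"
```

The test walks a log-likelihood ratio between two thresholds and stops as soon as either is crossed, which is what gives SPRT its bounded error rates with few observations.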
17. A Hybrid Approach to Real-world entities are not always represented by the 2012
Private Record same set of features in different data sets. Therefore,
Matching matching records of the same real-world entity
distributed across these data sets is a challenging task. If
the data sets contain private information, the problem
becomes even more difficult. Existing solutions to this
problem generally follow two approaches: sanitization
techniques and cryptographic techniques. We propose a
hybrid technique that combines these two approaches and
enables users to trade off between privacy, accuracy, and
cost. Our main contribution is the use of a blocking phase
that operates over sanitized data to filter out in a privacy-
preserving manner pairs of records that do not satisfy the
matching condition. We also provide a formal definition
of privacy and prove that the participants of our protocols
learn nothing other than their share of the result and what
can be inferred from their share of the result, their input
and sanitized views of the input data sets (which are
considered public information). Our method incurs
considerably lower costs than cryptographic techniques
and yields significantly more accurate matching results
compared to sanitization techniques, even when privacy
requirements are high.
18. ES-MPICH2: A An increasing number of commodity clusters are 2012
Message Passing connected to each other by public networks, which have
Interface with become a potential threat to security sensitive parallel
Enhanced Security applications running on the clusters. To address this
security issue, we developed a Message Passing Interface
(MPI) implementation to preserve confidentiality of
messages communicated among nodes of clusters in an
unsecured network. We focus on MPI rather than other
protocols, because MPI is one of the most popular
communication protocols for parallel computing on
clusters. Our MPI implementation—called ES-
MPICH2—was built based on MPICH2 developed by the
Argonne National Laboratory. Like MPICH2, ES-
MPICH2 aims at supporting a large variety of
computation and communication platforms like
commodity clusters and high-speed networks. We
integrated encryption and decryption algorithms into the
MPICH2 library with the standard MPI interface;
thus, data confidentiality of MPI applications can be
readily preserved without a need to change the source
codes of the MPI applications. MPI-application
programmers can fully configure any confidentiality
services in ES-MPICH2, because a secure configuration
file in ES-MPICH2 offers the programmers flexibility in
choosing any cryptographic schemes and keys
seamlessly incorporated in ES-MPICH2. We used the
Sandia Micro Benchmark and Intel MPI Benchmark
suites to evaluate and compare the performance of ES-
MPICH2 with the original MPICH2 version. Our
experiments show that overhead incurred by the
confidentiality services in ES-MPICH2 is marginal for
small messages. The security overhead in ES-MPICH2
becomes more pronounced with larger messages. Our
results also show that security overhead can be
significantly reduced in ES-MPICH2 by high-
performance clusters.
19. Ensuring Distributed Cloud computing enables highly scalable services to be 2012
Accountability for easily consumed over the Internet on an as-needed basis.
Data Sharing in the A major feature of the cloud services is that users’ data
Cloud are usually processed remotely in unknown machines
that users do not own or operate. While enjoying the
convenience brought by this new emerging technology,
users’ fears of losing control of their own data
(particularly, financial and health data) can become a
significant barrier to the wide adoption of cloud services.
To address this problem, here, we propose a novel highly
decentralized information accountability framework to
keep track of the actual usage of the users’ data in the
cloud. In particular, we propose an object-centered
approach that enables enclosing our logging mechanism
together with users’ data and policies. We leverage the
JAR programmable capabilities to both create a dynamic
and traveling object, and to ensure that any access to
users’ data will trigger authentication and automated
logging local to the JARs. To strengthen users’ control,
we also provide distributed auditing mechanisms. We
provide extensive experimental studies that demonstrate
the efficiency and effectiveness of the proposed
approaches.
20. BECAN: A The false data injection attack is a well-known threat to 2012
Bandwidth-Efficient wireless sensor networks, in which an adversary reports
Cooperative bogus information to the sink, causing erroneous decisions
Authentication at the upper level and energy waste in en-route nodes. In
Scheme for Filtering this paper, we propose a novel bandwidth-efficient
Injected False Data in cooperative authentication (BECAN) scheme for filtering
Wireless Sensor injected false data. Based on the random graph
Networks characteristics of sensor node deployment and the
cooperative bit-compressed authentication technique, the
proposed BECAN scheme can save energy by early
detecting and filtering the majority of injected false data
with minor extra overheads at the en-route nodes. In
addition, only a very small fraction of injected false data
needs to be checked by the sink, which thus largely
reduces the burden of the sink. Both theoretical and
simulation results are given to demonstrate the
effectiveness of the proposed scheme in terms of high
filtering probability and energy saving.
21. A Flexible Approach There is an increasing need for fault tolerance 2012
to Improving System capabilities in logic devices brought about by the scaling
Reliability with of transistors to ever smaller geometries. This paper
Virtual Lockstep presents a hypervisor-based replication approach that can
be applied to commodity hardware to allow for virtually
lockstepped execution. It offers many of the benefits of
hardware-based lockstep while being cheaper and easier
to implement and more flexible in the configurations
supported. A novel form of processor state fingerprinting
is also presented, which can significantly reduce the fault
detection latency. This further improves reliability by
triggering rollback recovery before errors are recorded to
a checkpoint. The mechanisms are validated using a full
prototype and the benchmarks considered indicate an
average performance overhead of approximately 14
percent with the possibility for significant optimization.
Finally, a unique method of using virtual lockstep for
fault injection testing is presented and used to show that
significant detection latency reduction is achievable by
comparing only a small amount of data across replicas.
22. A Learning-Based Despite the conventional wisdom that proactive security 2012
Approach to Reactive is superior to reactive security, we show that reactive
Security security can be competitive with proactive security as
long as the reactive defender learns from past attacks
instead of myopically overreacting to the last attack. Our
game-theoretic model follows common practice in the
security literature by making worst case assumptions
about the attacker: we grant the attacker complete
knowledge of the defender's strategy and do not require
the attacker to act rationally. In this model, we bound the
competitive ratio between a reactive defense algorithm
(which is inspired by online learning theory) and the best
fixed proactive defense. Additionally, we show that,
unlike proactive defenses, this reactive strategy is robust
to a lack of information about the attacker's incentives
and knowledge
23. Automated Security 2012
Test Generation with
Formal Threat Models
24. Automatic Byzantine-fault-tolerant replication enhances the 2012
Reconfiguration for availability and reliability of Internet services that store
Large-Scale Reliable critical state and preserve it despite attacks or software
Storage Systems errors. However, existing Byzantine-fault-tolerant
storage systems either assume a static set of replicas, or
have limitations in how they handle reconfigurations
(e.g., in terms of the scalability of the solutions or the
consistency levels they provide). This can be problematic
in long-lived, large-scale systems where system
membership is likely to change during the system
lifetime. In this paper, we present a complete solution for
dynamically changing system membership in a large-
scale Byzantine-fault-tolerant system. We present a
service that tracks system membership and periodically
notifies other system nodes of membership changes. The
membership service runs mostly automatically, to avoid
human configuration errors; is itself Byzantine-fault-
tolerant and reconfigurable; and provides applications
with a sequence of consistent views of the system
membership. We demonstrate the utility of this
membership service by using it in a novel distributed
hash table called dBQS that provides atomic semantics
even across changes in replica sets. dBQS is interesting
in its own right because its storage algorithms extend
existing Byzantine quorum protocols to handle changes
in the replica set, and because it differs from previous
DHTs by providing Byzantine fault tolerance and
offering strong semantics. We implemented the
membership service and dBQS. Our results show that the
approach works well, in practice: the membership service
is able to manage a large system and the cost to change
the system membership is low.
25. JS-Reduce Defending Web queries, credit card transactions, and medical 2012
Your Data from records are examples of transaction data flowing in
Sequential corporate data stores, and often revealing associations
Background between individuals and sensitive information. The serial
Knowledge Attacks release of these data to partner institutions or data
analysis centers in a nonaggregated form is a common
situation. In this paper, we show that correlations among
sensitive values associated to the same individuals in
different releases can be easily used to violate users'
privacy by adversaries observing multiple data releases,
even if state-of-the-art privacy protection techniques are
applied. We show how the above sequential background
knowledge can be actually obtained by an adversary, and
used to identify with high confidence the sensitive values
of an individual. Our proposed defense algorithm is
based on Jensen-Shannon divergence; experiments show
its superiority with respect to other applicable solutions.
To the best of our knowledge, this is the first work that
systematically investigates the role of sequential
background knowledge in serial release of transaction
data.
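The Jensen-Shannon divergence that the defense is based on can be sketched as follows (a minimal, self-contained illustration, not the JS-Reduce algorithm itself):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits (terms with p_i = 0 vanish)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetrized, smoothed KL against the
    midpoint distribution; bounded in [0, 1] when using log base 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike plain KL divergence, JS is symmetric and always finite, which makes it a convenient distance between the distributions of sensitive values across successive releases.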
26. Mitigating Distributed Network-based applications commonly open some 2012
Denial of Service known communication port(s), making themselves easy
Attacks in Multiparty targets for (distributed) Denial of Service (DoS) attacks.
Applications in the Earlier solutions for this problem are based on port-
Presence of Clock hopping between pairs of processes which are
Drifts synchronous or exchange acknowledgments. However,
acknowledgments, if lost, can cause a port to stay open
for a longer time and thus be vulnerable, while time servers
can become targets to DoS attack themselves. Here, we
extend port-hopping to support multiparty applications,
by proposing the BIGWHEEL algorithm, for each
application server to communicate with multiple clients
in a port-hopping manner without the need for group
synchronization. Furthermore, we present an adaptive
algorithm, HOPERAA, for enabling hopping in the
presence of bounded asynchrony, namely, when the
communicating parties have clocks with clock drifts. The
solutions are simple, based on each client interacting
with the server independently of the other clients,
without the need of acknowledgments or time server(s).
Further, they do not rely on the application having a
fixed port open in the beginning, neither do they require
the clients to get a "first-contact" port from a third party.
We show analytically the properties of the algorithms
and also study experimentally their success rates,
confirming the relation with the analytical bounds.
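The underlying port-hopping idea, in which both parties derive the same pseudo-random port schedule from a shared secret, can be sketched as follows (an illustrative sketch; BIGWHEEL and HOPERAA themselves add multi-client support and clock-drift adaptation that are not shown here):

```python
import hashlib

def port_for_interval(secret: bytes, interval: int,
                      low: int = 1024, high: int = 65535) -> int:
    """Derive the listening port for one hopping interval from a shared
    secret, so client and server agree without acknowledgments or a
    time server."""
    digest = hashlib.sha256(secret + interval.to_bytes(8, "big")).digest()
    return low + int.from_bytes(digest[:4], "big") % (high - low + 1)
```

The interval index would typically be the current time divided by the hop period; keeping that index aligned when the two clocks drift is precisely the problem HOPERAA addresses.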
27. On the Security and Content distribution via network coding has received a 2012
Efficiency of Content lot of attention lately. However, direct application of
Distribution via network coding may be insecure. In particular, attackers
Network Coding can inject "bogus" data to corrupt the content distribution
process so as to hinder the information dispersal or even
deplete the network resource. Therefore, content
verification is an important and practical issue when
network coding is employed. When random linear
network coding is used, it is infeasible for the source of
the content to sign all the data, and hence, the traditional
"hash-and-sign” methods are no longer applicable.
Recently, a new on-the-fly verification technique has
been proposed by Krohn et al. (IEEE S&P '04), which
employs a classical homomorphic hash function.
However, this technique is difficult to apply to
network coding because of high computational and
communication overhead. We explore this issue further
by carefully analyzing different types of overhead, and
propose methods to help reduce both the computational
and communication costs while providing provable
security at the same time.
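The homomorphic-hash property at the heart of on-the-fly verification can be sketched with toy parameters (deliberately tiny primes for illustration only; a real deployment uses cryptographic sizes):

```python
import random

# Toy group parameters: P prime, Q prime with Q | P - 1.
P, Q = 1019, 509
random.seed(1)
# Squaring puts each generator in the order-Q subgroup of quadratic residues.
G = [pow(random.randrange(2, P - 1), 2, P) for _ in range(4)]

def hhash(block):
    """Homomorphic hash of a length-4 block over Z_Q: prod g_i^{b_i} mod P.
    Homomorphism: hhash(c1*b1 + c2*b2) == hhash(b1)^c1 * hhash(b2)^c2 (mod P),
    so a receiver can verify linearly coded packets from the hashes of the
    original blocks, without the source signing every packet."""
    h = 1
    for g, b in zip(G, block):
        h = h * pow(g, b % Q, P) % P
    return h
```

This multiplicative structure is exactly what makes the technique compatible with random linear network coding, where received packets are linear combinations of source blocks.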
28. Security of Bertino- Recently, Bertino, Shang and Wagstaff proposed a time- 2012
Shang-Wagstaff Time- bound hierarchical key management scheme for secure
Bound Hierarchical broadcasting. Their scheme is built on elliptic curve
Key Management cryptography and implemented with tamper-resistant
Scheme for Secure devices. In this paper, we present two collusion attacks
Broadcasting on Bertino-Shang-Wagstaff scheme. The first attack does
not need to compromise any decryption device, while the
second attack requires compromising only a single
decryption device. Both attacks are feasible and effective.
29. Survivability Radio Frequency Identification (RFID) has been 2012
Experiment and developed as an important technique for many high
Attack security and high integrity settings. In this paper, we
Characterization for study survivability issues for RFID. We first present an
RFID RFID survivability experiment to define a foundation to
measure the degree of survivability of an RFID system
under varying attacks. Then we model a series of
malicious scenarios using stochastic process algebras and
study the different effects of those attacks on the ability
of the RFID system to provide critical services even
when parts of the system have been damaged. Our
simulation model relates its statistics to the attack
strategies and security recovery. The model helps system
designers and security specialists to identify the most
devastating attacks given the attacker's capacities and the
system's recovery abilities. The goal is to improve the
system survivability given possible attacks. Our model is
the first of its kind to formally represent and simulate
attacks on RFID systems and to quantitatively measure
the degree of survivability of an RFID system under
those attacks.
30. Persuasive Cued This paper presents an integrated evaluation of the 2012
Click-Points Design, Persuasive Cued Click-Points graphical password
Implementation, and scheme, including usability and security evaluations, and
Evaluation of a implementation considerations. An important usability
Knowledge- Based goal for knowledge-based authentication systems is to
Authentication support users in selecting passwords of higher security,
Mechanism in the sense of being from an expanded effective security
space. We use persuasion to influence user choice in
click-based graphical passwords, encouraging users to
select more random, and hence more difficult to guess,
click-points.
31. Resilient Modern computer systems are built on a foundation of 2012
Authenticated software components from a variety of vendors. While
Execution of Critical critical applications may undergo extensive testing and
Applications in evaluation procedures, the heterogeneity of software
Untrusted sources threatens the integrity of the execution
Environments environment for these trusted programs. For instance, if
an attacker can combine an application exploit with a
privilege escalation vulnerability, the operating system
(OS) can become corrupted. Alternatively, a malicious or
faulty device driver running with kernel privileges could
threaten the application. While the importance of
ensuring application integrity has been studied in prior
work, proposed solutions immediately terminate the
application once corruption is detected. Although this
approach is sufficient for some cases, it is undesirable for
many critical applications. In order to overcome this
shortcoming, we have explored techniques for leveraging
a trusted virtual machine monitor (VMM) to observe the
application and potentially repair damage that occurs. In
this paper, we describe our system design, which
leverages efficient coding and authentication schemes,
and we present the details of our prototype
implementation to quantify the overhead of our
approach. Our work shows that it is feasible to build a
resilient execution environment, even in the presence of a
corrupted OS kernel, with a reasonable amount of storage
and performance overhead.
TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON DATA MINING
S.NO TITLES ABSTRACT YEAR
1. A Survival Modeling In this paper, we propose a survival modeling approach 2012
Approach to to promoting ranking diversity for biomedical
Biomedical Search information retrieval. The proposed approach is
Result Diversification concerned with finding relevant documents that can
Using Wikipedia deliver more different aspects of a query. First, two
probabilistic models derived from survival analysis
theory are proposed for measuring aspect novelty.
2. A Fuzzy Approach for In this paper, we propose a new fuzzy clustering 2012
Multitype Relational approach for multitype relational data (FC-MR). In FC-
Data Clustering MR, different types of objects are clustered
simultaneously. An object is assigned a large
membership with respect to a cluster if its related objects
in this cluster have high rankings.
3. Anonimos: An LP- We present Anonimos, a Linear Programming-based 2012
Based Approach for technique for anonymization of edge weights that
Anonymizing preserves linear properties of graphs. Such properties
Weighted Social form the foundation of many important graph-theoretic
Network Graphs algorithms such as shortest paths problem, k-nearest
neighbors, minimum cost spanning tree, and maximizing
information spread.
4. A Methodology for In this paper, we tackle discrimination prevention in data 2012
Direct and Indirect mining and propose new techniques applicable for direct
Discrimination or indirect discrimination prevention individually or
Prevention in Data both at the same time. We discuss how to clean training
Mining datasets and outsourced datasets in such a way that direct
and/or indirect discriminatory decision rules are
converted to legitimate (non-discriminatory)
classification rules.
5. Mining Web Graphs In this paper, aiming at providing a general framework 2012
for Recommendations on mining Web graphs for recommendations, (1) we first
propose a novel diffusion method which propagates
similarities between different nodes and generates
recommendations; (2) then we illustrate how to
generalize different recommendation problems into our
graph diffusion framework.
6. Prediction of User’s Predicting a user's behavior while surfing the Internet 2012
Web-Browsing can be applied effectively in various critical applications.
Behavior: Application Such applications involve traditional tradeoffs between
of Markov Model modeling complexity and prediction accuracy. In this
paper, we analyze and study the Markov model and the
all-Kth Markov model in Web prediction. We propose a
new modified Markov model to alleviate the issue of
scalability in the number of paths.
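A first-order Markov predictor of the kind discussed can be sketched as follows (a minimal illustration; the paper's modified model for path scalability is not reproduced):

```python
from collections import defaultdict

def train(sessions):
    """Count page-to-page transitions over user sessions (first-order model)."""
    counts = defaultdict(lambda: defaultdict(int))
    for pages in sessions:
        for a, b in zip(pages, pages[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, page):
    """Return the most frequently observed successor of `page`, or None."""
    successors = counts.get(page)
    return max(successors, key=successors.get) if successors else None
```

Higher-order (all-Kth) variants condition on the last K pages instead of one, trading memory and sparsity for accuracy, which is the tradeoff the abstract refers to.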
7. Prototype Selection This paper provides a survey of the prototype selection 2012
for Nearest Neighbor methods proposed in the literature from a theoretical and
Classification: empirical point of view. Considering a theoretical point
Taxonomy and of view, we propose a taxonomy based on the main
Empirical Study characteristics presented in prototype selection, and we
analyze their advantages and drawbacks. The nearest
neighbor classifier suffers from several drawbacks, such
as high storage requirements, low efficiency in
classification response, and low noise tolerance.
8. Query Planning for Continuous Aggregation Queries over a Network of Data Aggregators (2012)
We present a low-cost, scalable technique to answer continuous aggregation queries using a network of aggregators of dynamic data items. In such a network of data aggregators, each data aggregator serves a set of data items at specific coherencies.
9. Revealing Density-Based Clustering Structure from the Core-Connected Tree of a Network (2012)
In this paper, we introduce a novel density-based network clustering method, called gSkeletonClu (graph-skeleton based clustering). By projecting an undirected network onto its core-connected maximal spanning tree, the clustering problem can be converted to detecting core-connectivity components on the tree.
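The maximal spanning tree step can be sketched with a standard Kruskal-style maximum spanning tree; the edge weights stand in for the paper's core-connectivity similarities, which are assumed here rather than computed.

```python
def max_spanning_tree(n, edges):
    """Kruskal on descending weights: maximum spanning tree of an undirected
    weighted graph with nodes 0..n-1 and edges given as (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                              # skip edges that close a cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

tree = max_spanning_tree(4, [(5, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)])
```

Cutting the lightest tree edges then separates the tree into the candidate clusters.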
10. Scalable Learning of Collective Behavior (2012)
The study of collective behavior is to understand how individuals behave in a social networking environment. Oceans of data generated by social media like Facebook, Twitter, Flickr, and YouTube present opportunities and challenges to study collective behavior on a large scale. In this work, we aim to learn to predict collective behavior in social media.
11. Weakly Supervised Joint Sentiment-Topic Detection from Text (2012)
This paper proposes a novel probabilistic modeling framework called the joint sentiment-topic (JST) model, based on latent Dirichlet allocation (LDA), which detects sentiment and topic simultaneously from text. A reparameterized version of the JST model called Reverse-JST, obtained by reversing the sequence of sentiment and topic generation in the modeling process, is also studied.
12. A Framework for Personal Mobile Commerce Pattern Mining and Prediction (2012)
Due to a wide range of potential applications, research on mobile commerce has received a lot of interest from both industry and academia. Among the active topic areas is the mining and prediction of users' mobile commerce behaviors, such as their movements and purchase transactions. In this paper, we propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and prediction of mobile users' movements and purchase transactions in the context of mobile commerce. The MCE framework consists of three major components: 1) a Similarity Inference Model (SIM) for measuring the similarities among stores and items, which are the two basic mobile commerce entities considered in this paper; 2) a Personal Mobile Commerce Pattern Mining (PMCP-Mine) algorithm for efficient discovery of mobile users' Personal Mobile Commerce Patterns (PMCPs); and 3) a Mobile Commerce Behavior Predictor (MCBP) for prediction of possible mobile user behaviors. To the best of our knowledge, this is the first work that facilitates mining and prediction of mobile users' commerce behaviors in order to recommend stores and items previously unknown to a user. We perform an extensive experimental evaluation by simulation and show that our proposals produce excellent results.
13. Efficient Extended Boolean Retrieval (2012)
Extended Boolean retrieval (EBR) models were proposed nearly three decades ago, but have had little practical impact, despite their significant advantages compared to either ranked keyword or pure Boolean retrieval. In particular, EBR models produce meaningful rankings; their query model allows the representation of complex concepts in an and-or format; and they are scrutable, in that the score assigned to a document depends solely on the content of that document, unaffected by any collection statistics or other external factors. These characteristics make EBR models attractive in domains typified by medical and legal searching, where the emphasis is on iterative development of reproducible complex queries of dozens or even hundreds of terms. However, EBR is much more computationally expensive than the alternatives. We consider the implementation of the p-norm approach to EBR, and demonstrate that ideas used in the max-score and wand exact optimization techniques for ranked keyword retrieval can be adapted to allow selective bypass of documents via a low-cost screening process for this and similar retrieval models. We also propose term-independent bounds that are able to further reduce the number of score calculations for short, simple queries under the extended Boolean retrieval model. Together, these methods yield an overall saving of 50 to 80 percent of the evaluation cost on test queries drawn from biomedical search.
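The p-norm scoring functions at the heart of this approach (due to Salton, Fox, and Wu) interpolate between strict Boolean and ranked retrieval. A minimal sketch, with per-operand query weights omitted for brevity:

```python
def p_or(scores, p=2.0):
    """p-norm OR over operand scores in [0, 1].

    p = 1 reduces to the average; as p grows, the result approaches the
    strict Boolean OR (the maximum operand).
    """
    return (sum(s ** p for s in scores) / len(scores)) ** (1.0 / p)

def p_and(scores, p=2.0):
    """p-norm AND over operand scores in [0, 1].

    Penalizes low operands; as p grows, the result approaches the strict
    Boolean AND (the minimum operand).
    """
    return 1.0 - (sum((1.0 - s) ** p for s in scores) / len(scores)) ** (1.0 / p)
```

Because every operand contributes to an AND node's score, a document cannot be skipped as easily as in ranked keyword retrieval, which is why the max-score/wand-style bypass bounds described above matter.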
14. Improving Aggregate Recommendation Diversity Using Ranking-Based Techniques (2012)
Recommender systems are becoming increasingly important to individual users and businesses for providing personalized recommendations. However, while the majority of algorithms proposed in the recommender systems literature have focused on improving recommendation accuracy (as exemplified by the recent Netflix Prize competition), other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. In this paper, we introduce and explore a number of item ranking techniques that can generate recommendations with substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy. Comprehensive empirical evaluation consistently shows the diversity gains of the proposed techniques using several real-world rating datasets and different rating prediction algorithms.
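One simple ranking-based technique in this spirit: among items whose predicted rating clears an acceptability threshold, recommend the least popular first, so rarely recommended items surface more often across the user base. The threshold, item names, and data below are illustrative assumptions, not the paper's exact algorithms.

```python
def rerank(predictions, popularity, threshold=3.5, k=2):
    """Top-k recommendations favoring aggregate diversity.

    predictions: item -> predicted rating for one user.
    popularity:  item -> how often the item is recommended system-wide.
    Items predicted above `threshold` are ranked least-popular-first;
    if too few qualify, fall back to predicted rating for the remainder.
    """
    good = [i for i, r in predictions.items() if r >= threshold]
    good.sort(key=lambda i: popularity[i])          # least popular first
    if len(good) >= k:
        return good[:k]
    rest = sorted((i for i in predictions if i not in good),
                  key=lambda i: -predictions[i])
    return good + rest[: k - len(good)]

preds = {"blockbuster": 4.8, "cult_film": 4.0, "indie": 3.6, "flop": 2.0}
pop = {"blockbuster": 9000, "cult_film": 400, "indie": 50, "flop": 10}
recs = rerank(preds, pop)
```

The threshold controls the accuracy/diversity trade-off: raising it keeps recommendations closer to the pure accuracy ranking.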
15. BibPro: A Citation Parser Based on Sequence Alignment (2012)
The dramatic increase in the number of academic publications has led to growing demand for efficient organization of resources to meet researchers' needs. As a result, a number of network services have compiled databases from the public resources scattered over the Internet. However, publications by different conferences and journals adopt different citation styles, and accurately extracting metadata from a citation string formatted in one of thousands of different styles is an interesting problem that has attracted a great deal of research attention in recent years. In this paper, based on the notion of sequence alignment, we present a citation parser called BibPro that extracts the components of a citation string. To demonstrate the efficacy of BibPro, we conducted experiments on three benchmark data sets. The results show that BibPro achieved over 90 percent accuracy on each benchmark. Even with citations and associated metadata retrieved from the web as training data, our experiments show that BibPro still achieves reasonable performance.
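Aligning a citation's token stream against a style template is the building block such a parser rests on; it can be sketched with the classic Needleman-Wunsch dynamic program. The token vocabulary and scoring parameters here are arbitrary illustrative choices.

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two token sequences."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap            # aligning a prefix of `a` against nothing
    for j in range(1, cols):
        dp[0][j] = j * gap            # aligning a prefix of `b` against nothing
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # match / substitute
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[-1][-1]
```

A citation parser would align field-type tokens (author, title, year, ...) rather than characters, picking the style template whose alignment score is highest.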
16. Extending Attribute Information for Small Data Set Classification (2012)
Data quantity is the main issue in the small data set problem, because insufficient data will usually not lead to robust classification performance. How to extract more effective information from a small data set is thus of considerable interest. This paper proposes a new attribute construction approach which converts the original data attributes into a higher-dimensional feature space to extract more attribute information, using a similarity-based algorithm with a classification-oriented fuzzy membership function. Seven data sets with different attribute sizes are employed to examine the performance of the proposed method. The results show that the proposed method has superior classification performance when compared to principal component analysis (PCA), kernel principal component analysis (KPCA), and kernel independent component analysis (KICA) with a Gaussian kernel in the support vector machine (SVM) classifier.
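A rough sketch of the general idea: construct similarity-based fuzzy-membership attributes relative to each class center and append them to the originals. The Gaussian membership, the sigma value, and the toy data are assumptions for illustration, not the paper's exact formulation.

```python
import math

def class_means(X, y):
    """Per-class attribute means, used as class 'centers'."""
    means = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def extend(X, y, sigma=1.0):
    """Append a Gaussian fuzzy membership to each class center, lifting
    every sample into a higher-dimensional feature space."""
    means = class_means(X, y)
    out = []
    for x in X:
        extra = [math.exp(-sum((a - m) ** 2 for a, m in zip(x, mu))
                          / (2 * sigma ** 2))
                 for mu in (means[lab] for lab in sorted(means))]
        out.append(list(x) + extra)
    return out

X = [[0.0, 0.0], [0.2, 0.0], [3.0, 3.0], [3.2, 3.0]]
y = [0, 0, 1, 1]
Xe = extend(X, y)
```

Each extended sample carries one extra attribute per class; a downstream classifier (e.g. an SVM) then trains on the widened representation.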
17. Horizontal Aggregations in SQL to Prepare Data Sets (2012)
Preparing a data set for analysis is generally the most time-consuming task in a data mining project, requiring many complex SQL queries, joining tables, and