Approximate Thin Plate Spline Mappings

Gianluca Donato¹ and Serge Belongie²

¹ Digital Persona, Inc., Redwood City, CA 94063
gianlucad@digitalpersona.com
² U.C. San Diego, La Jolla, CA 92093-0114
sjb@cs.ucsd.edu
Abstract. The thin plate spline (TPS) is an effective tool for modeling coordinate transformations that has been applied successfully in several computer vision applications. Unfortunately the solution requires the inversion of a $p \times p$ matrix, where $p$ is the number of points in the data set, thus making it impractical for large scale applications. As it turns out, a surprisingly good approximate solution is often possible using only a small subset of corresponding points. We begin by discussing the obvious approach of using the subsampled set to estimate a transformation that is then applied to all the points, and we show the drawbacks of this method. We then proceed to borrow a technique from the machine learning community for function approximation using radial basis functions (RBFs) and adapt it to the task at hand. Using this method, we demonstrate a significant improvement over the naive method. One drawback of this method, however, is that it does not allow for principal warp analysis, a technique for studying shape deformations introduced by Bookstein based on the eigenvectors of the $p \times p$ bending energy matrix. To address this, we describe a third approximation method based on a classic matrix completion technique that allows for principal warp analysis as a by-product. By means of experiments on real and synthetic data, we demonstrate the pros and cons of these different approximations so as to allow the reader to make an informed decision suited to his or her application.
1 Introduction

The thin plate spline (TPS) is a commonly used basis function for representing coordinate mappings from $\mathbb{R}^2$ to $\mathbb{R}^2$. Bookstein [3] and Davis et al. [5], for example, have studied its application to the problem of modeling changes in biological forms. The thin plate spline is the 2D generalization of the cubic spline. In its regularized form the TPS model includes the affine model as a special case.

One drawback of the TPS model is that its solution requires the inversion of a large, dense matrix of size $p \times p$, where $p$ is the number of points in the data set. Our goal in this paper is to present and compare three approximation methods that address this computational problem through the use of a subset of corresponding points. In doing so, we highlight connections to related approaches in the area of Gaussian RBF networks that are relevant to the TPS mapping problem. Finally, we discuss a novel application of the Nyström approximation [1] to the TPS mapping problem.

Our experimental results suggest that the present work should be particularly useful in applications such as shape matching and correspondence recovery (e.g. [2,7,4]) as well as in graphics applications such as morphing.
2 Review of Thin Plate Splines

Let $v_i$ denote the target function values at locations $(x_i, y_i)$ in the plane, with $i = 1, 2, \ldots, p$. In particular, we will set $v_i$ equal to the target coordinates $(x_i', y_i')$ in turn to obtain one continuous transformation for each coordinate. We assume that the locations $(x_i, y_i)$ are all different and are not collinear. The TPS interpolant $f(x, y)$ minimizes the bending energy

$$I_f = \iint_{\mathbb{R}^2} \left( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \right) dx \, dy$$

and has the form

$$f(x, y) = a_1 + a_x x + a_y y + \sum_{i=1}^{p} w_i \, U\!\left( \| (x_i, y_i) - (x, y) \| \right)$$

where $U(r) = r^2 \log r$. In order for $f(x, y)$ to have square integrable second derivatives, we require that

$$\sum_{i=1}^{p} w_i = 0 \quad \text{and} \quad \sum_{i=1}^{p} w_i x_i = \sum_{i=1}^{p} w_i y_i = 0 \, .$$

Together with the interpolation conditions, $f(x_i, y_i) = v_i$, this yields a linear system for the TPS coefficients:

$$\begin{bmatrix} K & P \\ P^T & O \end{bmatrix} \begin{bmatrix} w \\ a \end{bmatrix} = \begin{bmatrix} v \\ o \end{bmatrix} \qquad (1)$$

where $K_{ij} = U(\| (x_i, y_i) - (x_j, y_j) \|)$, the $i$th row of $P$ is $(1, x_i, y_i)$, $O$ is a $3 \times 3$ matrix of zeros, $o$ is a $3 \times 1$ column vector of zeros, $w$ and $v$ are column vectors formed from $w_i$ and $v_i$, respectively, and $a$ is the column vector with elements $a_1, a_x, a_y$. We will denote the $(p+3) \times (p+3)$ matrix of this system by $L$; as discussed e.g. in [7], $L$ is nonsingular. If we denote the upper left $p \times p$ block of $L^{-1}$ by $L_p^{-1}$, then it can be shown that

$$I_f \propto v^T L_p^{-1} v = w^T K w \, .$$
When there is noise in the specified values $v_i$, one may wish to relax the exact interpolation requirement by means of regularization. This is accomplished by minimizing

$$H[f] = \sum_{i=1}^{p} \left( v_i - f(x_i, y_i) \right)^2 + \lambda I_f \, .$$

The regularization parameter $\lambda$, a positive scalar, controls the amount of smoothing; the limiting case of $\lambda = 0$ reduces to exact interpolation. As demonstrated in [9,6], we can solve for the TPS coefficients in the regularized case by replacing the matrix $K$ by $K + \lambda I$, where $I$ is the $p \times p$ identity matrix.
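As a concrete illustration of the preceding construction, the following NumPy sketch assembles the system of Equation (1), optionally with the $K + \lambda I$ regularization just described, and evaluates the resulting mapping. The helper names (`tps_kernel`, `tps_fit`, `tps_eval`) and the convention $U(0) = 0$ are our own; this is a sketch, not code from the paper.

```python
import numpy as np

def tps_kernel(r):
    """U(r) = r^2 log r, with the limiting value U(0) = 0."""
    return np.where(r == 0.0, 0.0, r**2 * np.log(np.maximum(r, 1e-300)))

def tps_fit(pts, v, lam=0.0):
    """Solve Equation (1) for the TPS coefficients w and a = (a_1, a_x, a_y).

    pts: (p, 2) source locations; v: (p,) target values for one coordinate;
    lam: regularization parameter (lam = 0 gives exact interpolation).
    """
    p = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    K = tps_kernel(r) + lam * np.eye(p)       # K, or K + lam*I when regularizing
    P = np.hstack([np.ones((p, 1)), pts])     # i-th row of P is (1, x_i, y_i)
    L = np.zeros((p + 3, p + 3))              # [[K, P], [P^T, O]]
    L[:p, :p], L[:p, p:], L[p:, :p] = K, P, P.T
    sol = np.linalg.solve(L, np.concatenate([v, np.zeros(3)]))
    return sol[:p], sol[p:]

def tps_eval(pts, w, a, xy):
    """Evaluate f(x, y) = a_1 + a_x x + a_y y + sum_i w_i U(||(x_i, y_i) - (x, y)||)
    at each row of xy."""
    r = np.linalg.norm(xy[:, None, :] - pts[None, :, :], axis=2)
    return a[0] + xy @ a[1:] + tps_kernel(r) @ w
```

A full 2D mapping is obtained by calling `tps_fit` twice, once with the target $x'$ coordinates as $v$ and once with the target $y'$ coordinates.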
3 Approximation Techniques

Since inverting $L$ is an $O(p^3)$ operation, solving for the TPS coefficients can be very expensive when $p$ is large. We will now discuss three different approximation methods that reduce this computational burden to $O(m^3)$, where $m$ can be as small as $0.1p$. The corresponding savings factors in memory (5x) and processing time (1000x) thus make TPS methods tractable when $p$ is very large.

In the discussion below we use the following partition of the $K$ matrix:

$$K = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \qquad (2)$$

with $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{m \times n}$, and $C \in \mathbb{R}^{n \times n}$, where $n = p - m$. Without loss of generality, we will assume the $p$ points are labeled in random order, so that the first $m$ points represent a randomly selected subset.
3.1 Method 1: Simple Subsampling

The simplest approximation technique is to solve for the TPS mapping using only a randomly selected subset of the correspondences. This amounts to using $A$ in place of $K$ in Equation (1). We can then use the recovered coefficients to extrapolate the TPS mapping to the remaining points. The result of applying this approximation to some sample shapes is shown in Figure 1. In this case, certain parts were not sampled at all, and as a result the mapping in those areas is poor.
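A minimal sketch of Method 1, reusing `tps_fit` and `tps_eval` from the Section 2 sketch (the function name `method1` and the RNG handling are our own): fit the exact TPS on $m$ random correspondences, then extrapolate the recovered mapping to all $p$ points.

```python
import numpy as np

def method1(src, dst, m, rng=None):
    """Method 1: fit on a random subset of m correspondences, extrapolate to all p."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(src), size=m, replace=False)  # random subset of the points
    est = np.empty_like(dst, dtype=float)
    for c in range(2):                                 # one mapping per target coordinate
        w, a = tps_fit(src[idx], dst[idx, c])
        est[:, c] = tps_eval(src[idx], w, a, src)
    return est                                         # approximate images of all p points
```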
Fig. 1. Thin plate spline (TPS) mapping example. (a,b) Template and target synthetic fish shapes, each consisting of 98 points. (Correspondences between the two shapes are known.) (c) TPS mapping of (a) onto (b) using the subset of points indicated by circles (Method 1). Corresponding points are indicated by connecting line segments. Notice the quality of the mapping is poor where the samples are sparse. An improved approximation can be obtained by making use of the full set of target values; this is illustrated in (d), where we have used Method 2 (discussed in Section 3.2). A similar mapping is found for the same set of samples using Method 3 (see Section 3.3). In (e-h) we observe the same behavior for a pair of handwritten digits, where the correspondences (89 in all) have been found using the method of [2].

3.2 Method 2: Basis Function Subset

An improved approximation can be obtained by using a subset of the basis functions with all of the target values. Such an approach appears in [10,6] and Section 3.1 of [8] for the case of Gaussian RBFs. In the TPS case, we need to account for the affine terms, which leads to a modified set of linear equations. Starting from the cost function

$$R[\tilde{w}, a] = \frac{1}{2} \left\| v - \tilde{K} \tilde{w} - P a \right\|^2 + \frac{\lambda}{2} \tilde{w}^T A \tilde{w} \, ,$$

we minimize it by setting $\partial R / \partial \tilde{w}$ and $\partial R / \partial a$ to zero, which leads to the following $(m+3) \times (m+3)$ linear system,

$$\begin{bmatrix} \tilde{K}^T \tilde{K} + \lambda A & \tilde{K}^T P \\ P^T \tilde{K} & P^T P \end{bmatrix} \begin{bmatrix} \tilde{w} \\ a \end{bmatrix} = \begin{bmatrix} \tilde{K}^T v \\ P^T v \end{bmatrix} \qquad (3)$$

where $\tilde{K} = \begin{bmatrix} A \\ B^T \end{bmatrix}$ (the first $m$ columns of $K$), $\tilde{w}$ is an $m \times 1$ vector of TPS coefficients, and the rest of the entries are as before. Thus we seek weights for the reduced set of basis functions that take into account the full set of $p$ target values contained in $v$. If we call $\tilde{P}$ the first $m$ rows of $P$ and $\tilde{I}$ the first $m$ columns of the $p \times p$ identity matrix, then under the assumption $\tilde{P}^T \tilde{w} = 0$, Equation (3) is equivalent to

$$\begin{bmatrix} \tilde{K} + \lambda \tilde{I} & P \\ \tilde{P}^T & O \end{bmatrix} \begin{bmatrix} \tilde{w} \\ a \end{bmatrix} = \begin{bmatrix} v \\ o \end{bmatrix}$$

which corresponds to the regularized version of Equation (1) when using the subsampled $\tilde{K}$ and $\tilde{P}^T$ in place of $K$ and $P^T$.

The application of this technique to the fish and digit shapes is shown in Figure 1(d,h).
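The following sketch sets up and solves the system of Equation (3) directly (hedged as before: `method2_fit` is our name, and `tps_kernel` comes from the Section 2 sketch).

```python
import numpy as np

def method2_fit(src, v, m, lam=0.0):
    """Method 2: solve Equation (3) for m basis-function weights using all p targets."""
    p = len(src)
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = tps_kernel(r)
    Kt = K[:, :m]                                   # K~: first m columns of K (p x m)
    A = K[:m, :m]                                   # upper-left m x m block of K
    P = np.hstack([np.ones((p, 1)), src])
    M = np.block([[Kt.T @ Kt + lam * A, Kt.T @ P],
                  [P.T @ Kt,            P.T @ P]])  # (m+3) x (m+3) system matrix
    rhs = np.concatenate([Kt.T @ v, P.T @ v])
    sol = np.linalg.solve(M, rhs)
    return sol[:m], sol[m:]                         # w~ (m,), a (3,)
```

The fitted mapping is then evaluated exactly as in Method 1, using only the first $m$ points as basis centers, e.g. `tps_eval(src[:m], w, a, query)`.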
3.3 Method 3: Matrix Approximation

The essence of Method 2 was to use a subset of exact basis functions to approximate a full set of target values. We now consider an approach that uses a full set of approximate basis functions to approximate the full set of target values. The approach is based on a technique known as the Nyström method.

The Nyström method provides a means of approximating the eigenvectors of $K$ without using $C$. It was originally developed in the late 1920s for the numerical solution of eigenfunction problems [1] and was recently used in [11] for fast approximate Gaussian process regression and in [8] (implicitly) to speed up several machine learning techniques using Gaussian kernels. Implicit in the Nyström method is the assumption that $C$ can be approximated by $B^T A^{-1} B$, i.e.

$$\hat{K} = \begin{bmatrix} A & B \\ B^T & B^T A^{-1} B \end{bmatrix} \qquad (4)$$

If $\operatorname{rank}(K) = m$ and the $m$ rows of the submatrix $[A \;\, B]$ are linearly independent, then $\hat{K} = K$. In general, the quality of the approximation can be expressed as the norm of the difference $C - B^T A^{-1} B$, the Schur complement of $A$ in $K$.

Given the $m \times m$ diagonalization $A = U \Lambda U^T$, we can proceed to find the approximate eigenvectors of $K$:

$$\hat{K} = \tilde{U} \Lambda \tilde{U}^T, \quad \text{with} \quad \tilde{U} = \begin{bmatrix} U \\ B^T U \Lambda^{-1} \end{bmatrix} \qquad (5)$$

Note that in general the columns of $\tilde{U}$ are not orthogonal. To address this, first define $Z = \tilde{U} \Lambda^{1/2}$ so that $\hat{K} = Z Z^T$. Let $Q \Sigma Q^T$ denote the diagonalization of $Z^T Z$. Then the matrix $V = Z Q \Sigma^{-1/2}$ contains the leading orthonormalized eigenvectors of $\hat{K}$, i.e. $\hat{K} = V \Sigma V^T$, with $V^T V = I$.
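In code, the orthonormalized Nyström eigenvectors can be obtained as follows. Note one substitution relative to the text: because the TPS kernel is not positive definite, $\Lambda$ can contain negative eigenvalues, so rather than forming $Z = \tilde{U} \Lambda^{1/2}$ we orthonormalize $\tilde{U}$ with a QR factorization, which yields the same $V$ and $\Sigma$ in exact arithmetic. This is our adaptation, not the paper's exact recipe.

```python
import numpy as np

def nystrom_eig(K_mp):
    """Approximate orthonormal eigendecomposition K^ = V diag(sig) V^T.

    K_mp: (m, p) array holding [A  B], the first m rows of K (A assumed full rank).
    """
    m = K_mp.shape[0]
    A = K_mp[:, :m]
    lam, U = np.linalg.eigh(A)                 # A = U Lambda U^T
    Ut = (K_mp.T @ U) / lam                    # U~ = [U; B^T U Lambda^{-1}], Equation (5)
    Q, R = np.linalg.qr(Ut)                    # U~ = Q R with Q^T Q = I
    sig, S = np.linalg.eigh((R * lam) @ R.T)   # R Lambda R^T = S diag(sig) S^T
    return Q @ S, sig                          # V = Q S: K^ = V diag(sig) V^T, V^T V = I
```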
From the standard formula for the partitioned inverse of $L$, we have

$$L^{-1} = \begin{bmatrix} K^{-1} + K^{-1} P S^{-1} P^T K^{-1} & -K^{-1} P S^{-1} \\ -S^{-1} P^T K^{-1} & S^{-1} \end{bmatrix}, \qquad S = -P^T K^{-1} P$$

and thus

$$\begin{bmatrix} w \\ a \end{bmatrix} = L^{-1} \begin{bmatrix} v \\ o \end{bmatrix} = \begin{bmatrix} (I + K^{-1} P S^{-1} P^T) K^{-1} v \\ -S^{-1} P^T K^{-1} v \end{bmatrix}$$
Using the Nyström approximation to $K$, we have $\hat{K}^{-1} = V \Sigma^{-1} V^T$ and

$$\hat{w} = (I + V \Sigma^{-1} V^T P \hat{S}^{-1} P^T) V \Sigma^{-1} V^T v \, , \qquad \hat{a} = -\hat{S}^{-1} P^T V \Sigma^{-1} V^T v$$

with $\hat{S} = -P^T V \Sigma^{-1} V^T P$, which is $3 \times 3$. Therefore, by computing matrix-vector products in the appropriate order, we can obtain estimates to the TPS coefficients without ever having to invert or store a large $p \times p$ matrix. For the regularized case, one can proceed in the same manner, using

$$(V \Sigma V^T + \lambda I)^{-1} = V (\Sigma + \lambda I)^{-1} V^T \, .$$

Finally, the approximate bending energy is given by

$$w^T \hat{K} w = (V^T w)^T \Sigma \, (V^T w) \, .$$

Note that this bending energy is the average of the energies associated to the $x$ and $y$ components as in [3].
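Putting the pieces together, here is a sketch of the full Method 3 coefficient computation (our function names; $v$ is passed with both target coordinates as columns). Only matrix-vector products with the factors $V$ and $P$ are needed, plus a $3 \times 3$ solve:

```python
import numpy as np

def method3_fit(V, sig, P, v, lam=0.0):
    """TPS coefficients via the Nystrom factors: w^ and a^ as derived above.

    V: (p, m) orthonormal factors and sig: (m,) eigenvalues from nystrom_eig;
    P: (p, 3); v: (p, k) target values (k = 2 for a 2D mapping; v must be 2-D).
    """
    d = 1.0 / (sig + lam)                          # diagonal of (Sigma + lam I)^{-1}
    Kinv = lambda X: V @ (d[:, None] * (V.T @ X))  # X -> K^^{-1} X, never forming K^
    S = -P.T @ Kinv(P)                             # S^ = -P^T K^^{-1} P, 3 x 3
    a = -np.linalg.solve(S, P.T @ Kinv(v))         # a^ = -S^^{-1} P^T K^^{-1} v
    w = Kinv(v - P @ a)                            # w^ = K^^{-1} (v - P a^)
    return w, a

def bending_energy(V, sig, w):
    """Approximate bending energy w^T K^ w = (V^T w)^T Sigma (V^T w), one coordinate."""
    t = V.T @ w
    return t @ (sig * t)
```

Per the note above, the scalar $I_f$ estimate for a 2D mapping is the average of `bending_energy` over the $x$ and $y$ coefficient vectors.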
Let us briefly consider what $\hat{w}$ represents. The first $m$ components roughly correspond to the entries in $\tilde{w}$ for Method 2; these in turn correspond to the columns of $\hat{K}$ (i.e. $\tilde{K}$) for which exact information is available. The remaining entries weight columns of $\hat{K}$ with (implicitly) filled-in values for all but the first $m$ entries. In our experiments, we have observed that the latter values of $\hat{w}$ are nonzero, which indicates that these approximate basis functions are not being disregarded. Qualitatively, the approximation quality of Methods 2 and 3 is very similar, which is not surprising since they make use of the same basic information. The pros and cons of these two methods are investigated in the following section.
4 Experiments

4.1 Synthetic Grid Test

In order to compare the above three approximation methods, we ran a set of experiments based on warped versions of the cartesian grid shown in Figure 2(a). The grid consists of $12 \times 12$ points in a square of dimensions $128 \times 128$. Call this set of points $S_1$. Using the technique described in Appendix A, we generated point sets $S_2$ and $S_3$ by applying random TPS warps with bending energy 0.3 and 0.8, respectively; see Figure 2(b,c). We then studied the quality of each approximation method by varying the percentage of random samples used to estimate the (unregularized) mapping of $S_1$ onto $S_2$ and $S_3$, and measuring the mean squared error (MSE) in the estimated coordinates. The results are plotted in Figure 3; the error bars indicate one standard deviation over 20 repeated trials.

Fig. 2. Grids used for experimental testing. (a) Reference point set $S_1$: $12 \times 12$ points on the interval $[0, 128] \times [0, 128]$. (b,c) Warped point sets $S_2$ and $S_3$ with bending energy 0.3 and 0.8, respectively. To test the quality of the different approximation methods, we used varying percentages of points to estimate the TPS mapping from $S_1$ to $S_2$ and from $S_1$ to $S_3$.

Fig. 3. Comparison of approximation error. Mean squared error in position between points in the target grid and corresponding points in the approximately warped reference grid is plotted vs. percentage of randomly selected samples used (5–30%). Performance curves for each of the three methods are shown in (a) for $I_f = 0.3$ and (b) for $I_f = 0.8$.
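For reference, a sketch of the grid experiment, assuming the earlier sketches (`tps_eval`, `method1`, and the appendix's `random_tps_warp`) are in scope; the seed, loop, and MSE bookkeeping are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 128, 12), np.linspace(0, 128, 12))
S1 = np.column_stack([xx.ravel(), yy.ravel()])       # reference set S_1, p = 144

# Warp S_1 with a random TPS of bending energy 0.3 (identity affine part),
# using random_tps_warp from the appendix sketch.
a_id = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])  # affine part = identity map
S2 = np.column_stack([tps_eval(S1, random_tps_warp(S1, 0.3, rng), a_id[c], S1)
                      for c in range(2)])

for frac in (0.05, 0.10, 0.20, 0.30):                # percentage of samples used
    m = max(4, int(frac * len(S1)))
    est = method1(S1, S2, m, rng)                    # likewise for Methods 2 and 3
    mse = np.mean(np.sum((est - S2) ** 2, axis=1))   # MSE as plotted in Figure 3
    print(f"{frac:.0%}: MSE = {mse:.3f}")
```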
4.2 Approximate Principal Warps

In [3] Bookstein develops a multivariate shape analysis framework based on eigenvectors of the bending energy matrix $L_p^{-1} K L_p^{-1} = L_p^{-1}$, which he refers to as principal warps. Interestingly, the first 3 principal warps always have eigenvalue zero, since any warping of three points in general position (a triangle) can be represented by an affine transform, for which the bending energy is zero. The shape and associated eigenvalue of the remaining principal warps lend insight into the bending energy "cost" of a given mapping in terms of that mapping's projection onto the principal warps. Through the Nyström approximation in Method 3, one can produce approximate principal warps using $\hat{L}_p^{-1}$ as follows:

$$\begin{aligned} \hat{L}_p^{-1} &= \hat{K}^{-1} + \hat{K}^{-1} P S^{-1} P^T \hat{K}^{-1} \\ &= V \Sigma^{-1} V^T + V \Sigma^{-1} V^T P S^{-1} P^T V \Sigma^{-1} V^T \\ &= V \left( \Sigma^{-1} + \Sigma^{-1} V^T P S^{-1} P^T V \Sigma^{-1} \right) V^T \; \stackrel{\Delta}{=} \; V \hat{\Lambda} V^T \end{aligned}$$

where

$$\hat{\Lambda} \stackrel{\Delta}{=} \Sigma^{-1} + \Sigma^{-1} V^T P S^{-1} P^T V \Sigma^{-1} = W D W^T \, .$$

To obtain orthogonal eigenvectors we proceed as in Section 3.3 to get

$$\hat{\Lambda} = \hat{W} \hat{\Sigma} \hat{W}^T$$

where $\hat{W} \stackrel{\Delta}{=} W D^{1/2} Q \hat{\Sigma}^{-1/2}$ and $Q \hat{\Sigma} Q^T$ is the diagonalization of $D^{1/2} W^T W D^{1/2}$. Thus we can write

$$\hat{L}_p^{-1} = V \hat{W} \hat{\Sigma} \hat{W}^T V^T \, .$$

Fig. 4. Approximate principal warps for the fish shape. From left to right and top to bottom, the surfaces are ordered by eigenvalue in increasing order. The first three principal warps, which represent the affine component of the transformation and have eigenvalue zero, are not shown.
An illustration of approximate principal warps for the fish shape is shown in Figure 4, wherein we have used $m = 15$ samples. As in [3], the principal warps are visualized as continuous surfaces, where the surface is obtained by applying a warp to the coordinates in the plane using a given eigenvector of $\hat{L}_p^{-1}$ as the nonlinear spline coefficients; the affine coordinates are set to zero. The corresponding exact principal warps are shown in Figure 5. In both cases, warps 4 through 12 are shown, sorted in ascending order by eigenvalue.

Fig. 5. Exact principal warps for the fish shape.

Given a rank $m$ Nyström approximation, at most $m - 3$ principal warps with nonzero eigenvalue are available. These correspond to the principal warps at the "low frequency" end, meaning that very localized warps, e.g. pronounced stretching between adjacent points in the target shape, will not be captured by the approximation.
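A sketch of this computation follows, with one simplification of ours: since $\hat{\Lambda}$ is a symmetric $m \times m$ matrix, a single symmetric eigendecomposition of it already yields orthonormal warps $V E$, which is equivalent in exact arithmetic to the $W$, $D$, $Q$ route above.

```python
import numpy as np

def approx_principal_warps(V, sig, P):
    """Approximate principal warps: orthonormal eigenvectors of L^_p^{-1} = V Lam^ V^T."""
    M = V.T @ P                                  # V^T P, m x 3
    G = M / sig[:, None]                         # Sigma^{-1} V^T P
    S = -M.T @ G                                 # S = -P^T V Sigma^{-1} V^T P
    Lam = np.diag(1.0 / sig) + G @ np.linalg.solve(S, G.T)
    Lam = (Lam + Lam.T) / 2                      # symmetrize against round-off
    e, E = np.linalg.eigh(Lam)                   # Lam^ = E diag(e) E^T
    return V @ E, e                              # warps are the columns of V E
```

The three eigenvalues closest to zero correspond to the affine component; the remaining (at most $m - 3$) columns are the approximate principal warps visualized in Figure 4.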
4.3 Discussion

We now discuss the relative merits of the above three methods. From the synthetic grid tests we see that Method 1, as expected, has the highest MSE. Considering that the spacing between neighboring points in the grid is about 10, it is noteworthy, however, that all three methods achieve an MSE of less than 2 at 30% subsampling. Thus while Method 1 is not optimal in the sense of MSE, its performance is likely to be reasonable for some applications, and it has the advantage of being the least expensive of the three methods.

In terms of MSE, Methods 2 and 3 perform roughly the same, with Method 2 holding a slight edge, more so at 5% for the second warped grid. Method 3 has a disadvantage built in relative to Method 2, due to the orthogonalization step; this leads to an additional loss in significant figures and a slight increase in MSE. In this regard Method 2 is the preferred choice.

While Method 3 is comparatively expensive and has slightly higher MSE than Method 2, it has the benefit of providing approximate eigenvectors of the bending energy matrix. Thus with Method 3 one has the option of studying shape transformations using principal warp analysis.

As a final note, we have observed that when the samples are chosen badly, e.g. crowded into a small area, Method 3 performs better than Method 2. This is illustrated in Figure 6, where all of the samples have been chosen at the back of the tail fin. Larger displacements between corresponding points are evident near the front of the fish for Method 2. We have also observed that the bending energy estimate of Method 2 ($\tilde{w}^T A \tilde{w}$) exhibits higher variance than that of Method 3; e.g. at a 20% sampling rate on the fish shapes warped using $I_f = 0.3$ over 100 trials, Method 2 estimates $I_f$ to be 0.29 with $\sigma = 0.13$ whereas Method 3 gives 0.25 and $\sigma = 0.06$. We conjecture that this advantage arises from the presence of the approximate basis functions in the Nyström approximation, though a rigorous explanation is not known to us.

Fig. 6. Comparison of Methods 2 (a) and 3 (b) for poorly chosen sample locations. (The performance of Method 1 was terrible and is not shown.) Both methods perform well considering the location of the samples. Note that the error is slightly lower for Method 3, particularly at points far away from the samples.
5 Conclusion

We have discussed three approximate methods for recovering TPS mappings between 2D pointsets that greatly reduce the computational burden. An experimental comparison of the approximation error suggests that the two methods that use only a subset of the available correspondences but take into account the full set of target values perform very well. Finally, we observed that the method based on the Nyström approximation allows for principal warp analysis and performs better than the basis-subset method when the subset of correspondences is chosen poorly.

Acknowledgments. The authors wish to thank Charless Fowlkes, Jitendra Malik, Andrew Ng, Lorenzo Torresani, Yair Weiss, and Alice Zheng for helpful discussions. We would also like to thank Haili Chui and Anand Rangarajan for useful insights and for providing the fish datasets.
Appendix A: Generating Random TPS Transformations

To produce a random TPS transformation with bending energy $\nu$, first choose a set of $p$ reference points (e.g. on a grid) and form $L_p^{-1}$. Now generate a random vector $u$, set its last three components to zero, and normalize it. Compute the diagonalization $L_p^{-1} = U \Lambda U^T$, with the eigenvalues sorted in descending order. Finally, compute $w = \sqrt{\nu} \, U \Lambda^{1/2} u$. Since $I_f$ is unaffected by the affine terms, their values are arbitrary; we set translation to $(0, 0)$ and scaling to $(1, 0)$ and $(0, 1)$.
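A sketch of this procedure (the name `random_tps_warp` and the eigenvalue clipping are ours; the affine part is handled by the caller as noted above):

```python
import numpy as np

def random_tps_warp(pts, nu, rng=None):
    """Draw TPS coefficients w with bending energy nu, per Appendix A."""
    rng = rng or np.random.default_rng()
    p = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    K = np.where(r == 0.0, 0.0, r**2 * np.log(np.maximum(r, 1e-300)))  # U(r)
    P = np.hstack([np.ones((p, 1)), pts])
    L = np.zeros((p + 3, p + 3))
    L[:p, :p], L[:p, p:], L[p:, :p] = K, P, P.T
    Lp_inv = np.linalg.inv(L)[:p, :p]             # upper-left p x p block of L^{-1}
    lam, U = np.linalg.eigh((Lp_inv + Lp_inv.T) / 2)
    lam, U = lam[::-1], U[:, ::-1]                # eigenvalues in descending order
    u = rng.standard_normal(p)
    u[-3:] = 0.0                                  # zero the (affine) null directions
    u /= np.linalg.norm(u)
    return np.sqrt(nu) * U @ (np.sqrt(np.clip(lam, 0.0, None)) * u)  # w = sqrt(nu) U Lam^{1/2} u
```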
References

1. C. T. H. Baker. The Numerical Treatment of Integral Equations. Oxford: Clarendon Press, 1977.
2. S. Belongie, J. Malik, and J. Puzicha. Matching shapes. In Proc. 8th Int'l. Conf. Computer Vision, volume 1, pages 454–461, July 2001.
3. F. L. Bookstein. Principal warps: thin-plate splines and decomposition of deformations. IEEE Trans. Pattern Analysis and Machine Intelligence, 11(6):567–585, June 1989.
4. H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. In Proc. IEEE Conf. Comput. Vision and Pattern Recognition, pages 44–51, June 2000.
5. M. H. Davis, A. Khotanzad, D. Flamig, and S. Harms. A physics-based coordinate transformation for 3-D image matching. IEEE Trans. Medical Imaging, 16(3):317–328, June 1997.
6. F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995.
7. M. J. D. Powell. A thin plate spline method for mapping curves into curves in two dimensions. In Computational Techniques and Applications (CTAC95), Melbourne, Australia, 1995.
8. A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In ICML, 2000.
9. G. Wahba. Spline Models for Observational Data. SIAM, 1990.
10. Y. Weiss. Smoothness in layers: Motion segmentation using nonparametric mixture estimation. In Proc. IEEE Conf. Comput. Vision and Pattern Recognition, pages 520–526, 1997.
11. C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference, pages 682–688, 2001.