Hello all,
I attended the 11th Max Planck Advanced Course
on the Foundations of Computer Science at Saarbrücken, Germany.
A summary of the course, written by Swaprava Nath, has been posted on Noam Nisan's blog AGTB.
http://agtb.wordpress.com/2010/08/18/swaprava-nath-reports-on-saarbrucken-agt-course/
My comments follow it there, so interested readers may read the blog post.
---
Sujit Gujar
Research Student,
Indian Institute of Science,
Bangalore.
Monday, August 30, 2010
Wednesday, April 21, 2010
M-PREF 2010
Hello all,
This year's Multidisciplinary Workshop on Advances in Preference Handling (M-PREF 2010) will be held in conjunction with ECAI 2010.
For more details: http://preferencehandling.free.fr/welcome.html
If you are interested in submitting a paper, hurry up: the deadline is 7 May.
---
Sujit Gujar
Research Student,
Indian Institute of Science, Bangalore.
Tolerable Manipulability in Dynamic Assignment without Money
This paper is going to appear in the proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI '10), 2010. It is joint work with James Zou and David Parkes of Harvard University, carried out during my summer internship at Harvard.
We studied the problem of object assignment in a dynamic setting: agents arrive dynamically and each needs one object. The agents have strict preferences over the objects, and no monetary transfers are involved.
Consider a collaborative project such as clean-up tasks on a wiki. Interested users participate voluntarily, but they may have preferences over the tasks. The difficulty is that not all agents are available at the same time: agents arrive dynamically, remain available for some time, and may then leave the system. So if we are to assign a task to an agent at all, we must do so before he or she leaves, and we should try to assign each agent a preferred task, even though the preferences are private information of the agents.
In static settings, Svensson showed that if we require non-bossiness and neutrality with respect to objects, serial dictatorship is the only strategyproof mechanism for object assignment. Non-bossiness means that no agent can change the outcome for other agents without affecting his own. Using his theorem, we show that in online settings the greedy algorithm, which we call APSD, is the only strategyproof mechanism that also satisfies neutrality and non-bossiness.
APSD (Arrival Priority Serial Dictatorship):
As soon as an agent arrives, he is assigned his most preferred item from the pool of available items.
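The rule above can be sketched in a few lines. This is my own minimal illustration, not code from the paper; the data layout (preference lists ordered best-first) is an assumption for the example.

```python
from typing import Dict, List

def apsd(arrival_order: List[str],
         preferences: Dict[str, List[str]],
         items: List[str]) -> Dict[str, str]:
    """Arrival Priority Serial Dictatorship: each agent, on arrival,
    takes their most preferred item still in the pool."""
    pool = set(items)
    assignment = {}
    for agent in arrival_order:
        for item in preferences[agent]:  # preference list, best first
            if item in pool:
                assignment[agent] = item
                pool.remove(item)
                break
    return assignment

# Agent 'a' arrives first and grabs 'x'; 'b' must settle for 'z'.
prefs = {'a': ['x', 'y', 'z'], 'b': ['x', 'z', 'y']}
print(apsd(['a', 'b'], prefs, ['x', 'y', 'z']))  # {'a': 'x', 'b': 'z'}
```

The example also shows the downside discussed below: the early arrival dictates what is left for everyone after him.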
APSD is a very simple, natural mechanism and satisfies nice game-theoretic properties, viz. strategyproofness, non-bossiness, neutrality, and ex-post efficiency. However, it may happen that because one agent arrives sooner and gets his best choice, all subsequent agents are made worse off. If we somehow know that some agent is not much worse off doing a less preferred task (or receiving a less preferred object) than others would be, then we may assign him this less preferred task/object and make the others happier. Based on this idea, we propose a mechanism we call the scoring rule (SR). SR is susceptible to manipulation. We characterize all possible manipulations and show that if everybody manipulates optimally, SR reduces to APSD. If only a fraction of the agents are strategic and the others report truthfully, the ex-ante performance of SR is better than that of APSD.
This performance is measured using rank-based utilities, as used by Budish and Cantillon. We assume risk-neutral agents with a constant difference in utility between items adjacent in their preference list; that is, the difference in utility between receiving the object ranked first and the object ranked second is the same as the difference between receiving the objects ranked second and third. Under this assumption, we use the expected rank that agents assign to their allotted objects as the performance metric for a mechanism.
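With constant utility gaps between adjacent ranks, comparing mechanisms by expected rank is equivalent to comparing expected utilities for risk-neutral agents. A small sketch of the metric (my own illustration of the idea, not the paper's code):

```python
from typing import Dict, List

def average_rank(assignment: Dict[str, str],
                 preferences: Dict[str, List[str]]) -> float:
    """Mean rank (1 = top choice) that agents assign to their allotted
    objects; lower is better under the constant-gap utility assumption."""
    ranks = [preferences[a].index(obj) + 1 for a, obj in assignment.items()]
    return sum(ranks) / len(ranks)

# Both agents rank 'x' first; one of them must take their second choice.
prefs = {'a': ['x', 'y'], 'b': ['x', 'y']}
print(average_rank({'a': 'x', 'b': 'y'}, prefs))  # 1.5
```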
We also compare the ex-ante performance of SR with CONSENSUS, an online stochastic-optimization-based algorithm. SR outperforms CONSENSUS.
For technical details, please refer to our paper.
(If you want a copy for non-commercial purposes, please let me know.)
---
Sujit Gujar
Research Student,
Indian Institute of Science, Bangalore.
Sunday, January 17, 2010
Cost Sharing Mechanisms
Hello all,
A vast amount of literature is available on cost-sharing mechanisms. Here is a summary of a paper by Hervé Moulin and Scott Shenker, titled "Strategyproof sharing of submodular costs: budget balance versus efficiency". This paper appeared in Economic Theory, Volume 18, Number 3, November 2001.
The paper studies a scenario in which a service is to be allocated among n agents. Each agent may or may not receive the service. The cost of the service depends on the set of agents served and is assumed throughout to be a submodular function. Each agent has a maximum willingness to pay (u_i) for receiving the service; the u_i are private to the agents. The authors explore strategyproof mechanisms for allocating the service to a subset of the agents.
By the Green-Laffont impossibility theorem, no strategyproof mechanism is both efficient and budget balanced.
The authors first propose a class of mechanisms that is group strategyproof and budget balanced and satisfies Voluntary Participation (VP), No Positive Transfers (NPT), and Consumer Sovereignty (CS). These mechanisms rely on a cross-monotonic cost-sharing method for the given cost function to satisfy the above properties.
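The mechanism driven by a cross-monotonic cost-sharing method works by iteratively dropping agents who are unwilling to pay their current share; cross-monotonicity (shares only rise as agents leave) is what makes this well-behaved. A sketch under illustrative assumptions: the fixed total cost and the egalitarian split are my own example, not from the paper.

```python
from typing import Callable, Dict, FrozenSet, Set

def moulin_mechanism(agents: Set[str],
                     willingness: Dict[str, float],
                     share: Callable[[str, FrozenSet[str]], float]) -> Set[str]:
    """Iteratively drop every agent whose reported willingness to pay is
    below their current cost share; stop when the served set is stable."""
    served = set(agents)
    while True:
        coalition = frozenset(served)
        dropped = {i for i in served if willingness[i] < share(i, coalition)}
        if not dropped:
            return served
        served -= dropped

# Illustrative cross-monotonic method: equal split of a fixed cost of 9.
def equal_share(i: str, coalition: FrozenSet[str]) -> float:
    return 9.0 / len(coalition)

# 'c' drops at share 3; shares rise to 4.5, which 'a' and 'b' both accept.
print(sorted(moulin_mechanism({'a', 'b', 'c'},
                              {'a': 10.0, 'b': 5.0, 'c': 2.0},
                              equal_share)))  # ['a', 'b']
```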
One can use any cross-monotonic cost-sharing method of one's choice, from egalitarian to Shapley-value-based or their convex combinations. The authors also show that Shapley-value-based cost sharing is the one with minimal loss in efficiency.
When efficiency matters more than budget balance, the authors propose a marginal-cost-sharing mechanism, which belongs to the class of Groves mechanisms (efficient + strategyproof).
The authors show that if a mechanism must satisfy certain reasonable properties along with efficiency and strategyproofness, the marginal-cost-sharing mechanism is the unique such mechanism.
The authors compare the maximal worst-case welfare loss of the Shapley cost-sharing mechanism against the worst-case maximal budget deficit of the marginal-cost-sharing mechanism, and show that the former is smaller than the latter.
Thus the authors argue that budget-balanced mechanisms are preferable: they are also group strategyproof, and they form a rich class compared to the unique efficient mechanism.
Acknowledgements:
This paper was presented at ECL by Radhika, who also helped edit this summary. The initial draft of the annotation of the talk was prepared by Rohith Vallam.
---
Sujit Gujar
Research Student,
Indian Institute of Science, Bangalore.
Labels:
Budget Balance,
Cost Sharing Mechanisms,
Efficiency
Monday, December 28, 2009
Elections can be Manipulated Often
Today I will talk about the paper by Friedgut, Kalai, and Nisan titled "Elections Can be Manipulated Often".
The Gibbard-Satterthwaite (GS) theorem states that any non-dictatorial social choice function (SCF)/voting procedure is manipulable. One approach to circumventing this problem is to use voting procedures that are difficult, though not impossible, to manipulate; that is, manipulating the SCF is NP-complete. However, the notion of NP-completeness relies on worst-case complexity: manipulation may be hard at some type profiles but easy at others.
In this paper, the authors show that when there are 3 alternatives, a random manipulation is beneficial with non-negligible probability. Let M_i(f) be the probability that a random manipulation is profitable for agent i. The main theorem of the paper is:
For every \epsilon > 0, if f is \epsilon-far from dictatorship, there exists a constant C such that \sum_{i=1}^n M_i(f) \geq C\epsilon^2.
That means that if f is far from dictatorship, some agent has non-negligible manipulation power. Also, one cannot give a bound independent of n. The proof is in three steps.
First step:
Since the social welfare function F is neutral, it can be determined by a single Boolean function. Now, if this Boolean function is epsilon-far from dictatorship, then with probability delta, F does not have a Condorcet winner (delta = C*epsilon).
Second step:
They define a notion of manipulation by many voters, M^{a,b}(f). If this quantity is low, one can construct an IIA SWF that is very close to the Condorcet winner, and by step one, f will be very close to dictatorship. That is, if f is far from dictatorship, M^{a,b}(f) is reasonably high.
Third step:
The authors show that M^{a,b}(f) <= 6 \sum_i M_i(f). These three steps together imply the theorem. It is quite an interesting result, as are the results cited therein. For the paper: http://www.cs.huji.ac.il/~noam/apx-gs.pdf
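The quantity M_i(f) is easy to explore empirically. Here is a small Monte Carlo sketch of my own (not from the paper) estimating the probability that a uniformly random misreport benefits a random voter under plurality with 3 alternatives; the lowest-index tie-breaking rule is an illustrative assumption.

```python
import itertools
import random

ALTS = [0, 1, 2]
RANKINGS = list(itertools.permutations(ALTS))  # all 6 strict rankings

def plurality(profile):
    """Plurality winner; ties broken toward the lowest alternative index."""
    counts = [0, 0, 0]
    for ranking in profile:
        counts[ranking[0]] += 1
    return max(ALTS, key=lambda a: (counts[a], -a))

def estimate_manipulation_power(n=5, trials=20000, seed=0):
    """Fraction of (random profile, random voter, random misreport) draws
    where the misreport yields an outcome the voter truly prefers."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        profile = [rng.choice(RANKINGS) for _ in range(n)]
        i = rng.randrange(n)
        truth = profile[i]
        honest = plurality(profile)
        lie = rng.choice(RANKINGS)
        outcome = plurality(profile[:i] + [lie] + profile[i + 1:])
        if truth.index(outcome) < truth.index(honest):
            hits += 1
    return hits / trials

print(estimate_manipulation_power())  # a non-negligible fraction
```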
---
Sujit Gujar
Research Student,
Indian Institute of Science, Bangalore.
Thursday, December 17, 2009
ACM EC'10 and AAAI'10
ACM EC: http://www.sigecom.org/ec10/ Deadline for paper submission: 11 Jan 2010.
AAAI: http://www.aaai.org/Conferences/AAAI/aaai10.php
(CFP: http://www.aaai.org/Conferences/AAAI/2010/aaai10call.php)
Deadline for paper submission: 18 Jan 2010.
---
Sujit Gujar
Research Student,
Indian Institute of Science, Bangalore.
Sunday, November 29, 2009
Budget Balanced Mechanism Design
One fine day, in a lucky draw, I got 4 tickets for a movie show on Friday evening. Having lots of good friends, I was unable to choose 3 friends to take to the movie with me. I decided I should take the three friends who would enjoy the movie most that evening. However, all my friends are intelligent and rational and would all claim to enjoy it most. So I had to quantify their enjoyment somehow and get correct information about it. Let us say they have valuations for the movie tickets. Now, being a student of game theory, I planned to use a mechanism to elicit the true valuations: I would charge my friends for the tickets in such a way that they would be willing to tell me their exact valuations. The sole purpose of charging friends is to obtain information; I am not interested in making money, so I decided I would redistribute the money collected among the friends. Soon I realised that Green and Laffont had already proved that I cannot design a mechanism that allots the tickets to those who value them most, elicits the truth, and has zero net money transfer. :(
This is really bad news, but I have to do something. Okay, let the ticket allotment not be budget balanced. But how much money would I have to inject, or be left with, in the worst case? No, no... what am I thinking, injecting money? Not at all; I am not going to pay my friends for telling me true information. So I have to ensure that I never pay from my own pocket for allotting the free tickets.
So, in mechanism-design terminology, my problem is to design a mechanism in which: the tickets are assigned to those who value them most, i.e., the outcome is allocatively efficient (AE); it is a best response for each agent to report their true values, i.e., the mechanism is dominant strategy incentive compatible (DSIC); no friend who participates gets negative net utility, i.e., it is individually rational (IR); and I do not end up paying for the mechanism, i.e., it is feasible (F).
Good, now I know what I want in mechanism-design terms. The research papers by Guo and Conitzer and by Moulin are handy here. What do these two papers do?
(They do more than I need, but I will describe only what I needed here.)
They mainly address the following problem:
p copies of an item are available to be distributed among n competing agents. (The objects being identical, this is referred to as the homogeneous case.) As a social planner, you are supposed to assign these objects to the p agents who value them most (assuming unit demand; the authors also handle multi-unit demand, but I won't discuss it here). So the goal is to design a mechanism that is DSIC, allocatively efficient (AE), individually rational (IR), and feasible (F) for assigning p objects among n competing agents.
Now, by Groves' theorem, we must look for a Groves mechanism if we want AE+DSIC. Clearly we have to use the Groves payment scheme after choosing the p highest bidders. The classical VCG mechanism allots the objects to the p highest bidders and charges each of them the (p+1)-th highest bid.
So we will be left with p times the (p+1)-th bid. This is the VCG surplus in the system. Way back in 1979, Maskin (Nobel Laureate, 2007) suggested redistributing this surplus among the participating agents in such a way that the other game-theoretic properties are preserved; that is, agents receive a rebate from the mechanism. So we have to design rebate functions for the n agents. In their papers, Moulin, and Guo and Conitzer, design rebate functions that minimize the budget imbalance (equivalently, maximize the fraction of the VCG payments that gets redistributed) in the worst case. They also prove that no other mechanism can perform better in the worst case. And both mechanisms (that is, rebate functions) are identical, though they start from slightly different formulations. These nice rebate functions are linear functions of the bids of the remaining agents.
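The VCG step and one simple redistribution scheme can be sketched as follows. This illustrates the Bailey/Cavallo-style rebate (each agent gets a 1/n share of the surplus of the economy without them), which is DSIC and never runs a deficit; it is not the worst-case-optimal Moulin/Guo-Conitzer rebate. The numerical bids are my own example.

```python
from typing import List, Set, Tuple

def vcg_homogeneous(bids: List[float], p: int) -> Tuple[Set[int], float]:
    """VCG for p identical items, unit demand: the p highest bidders win,
    and each pays the (p+1)-th highest bid."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = set(order[:p])
    price = bids[order[p]]  # (p+1)-th highest bid
    return winners, price

def bailey_cavallo_rebates(bids: List[float], p: int) -> List[float]:
    """Rebate for each agent: 1/n of the VCG surplus of the economy
    without that agent (requires n >= p + 2)."""
    n = len(bids)
    rebates = []
    for i in range(n):
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        surplus_without_i = p * others[p]  # (p+1)-th highest among the others
        rebates.append(surplus_without_i / n)
    return rebates

bids = [10.0, 8.0, 6.0, 4.0, 2.0]
winners, price = vcg_homogeneous(bids, p=2)
print(winners, price)          # {0, 1} 6.0 -> surplus is 2 * 6.0 = 12.0
print(sum(bailey_cavallo_rebates(bids, p=2)))  # 9.6, never exceeds 12.0
```

Note that the total rebate (9.6 here) falls short of the surplus (12.0); the Moulin/Guo-Conitzer functions are designed to shrink this worst-case gap as far as possible.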
Now, what if I had received 4 movie tickets in the lucky draw for 4 different movies?
I would keep the one most interesting to me and distribute the remaining three among the friends who value them most. However, now I cannot use Moulin's or Guo's results: each friend may value different movies differently, so they will submit different bids for different movies; in short, each will submit 3 real numbers to me. What happens when there are p different objects (that is, heterogeneous objects) and n agents, each competing for one of these p objects? I started reading the two papers again and realised that their rebate functions are linear in the bids of the remaining agents. So let me also start with similar rebate functions. Then I realised that, in the worst case, if I must preserve IR+AE+DSIC+F, no linear rebate function can guarantee that a non-zero fraction of the VCG surplus is redistributed.
I immediately conjectured that no mechanism would do well. (Too quick to jump to a conclusion, which eventually proved wrong.) Soon I realised that, as opposed to the homogeneous case, here the VCG surplus is not a linear function of the bids received, so linear functions would not do well anyway. Could some non-linear function do well? I applied the functions from Cavallo's and Bailey's papers to the heterogeneous-objects case. I could prove that, when there are sufficiently many agents, the scheme definitely redistributes a non-zero fraction of the VCG surplus. Can it be worst-case optimal? Unlikely.
Then I thought: we already have optimal rebate functions for the homogeneous case, and the homogeneous case is a special case of the heterogeneous one, so the rebate functions in my case have a larger domain. Could I use some kind of function extension so that, in the homogeneous case, the new rebate functions match Moulin's? Yes, I could rewrite the original functions in such a way that they apply here and are worst-case optimal when the objects are identical. However, I could not prove IR+F. (They are designed so that AE+DSIC are preserved.) I then ran ample simulations and found that, on the generated instances, our mechanism is IR+F and worst-case optimal. The proofs are still elusive.
The other direction out of the impossibility of linear rebate functions guaranteeing a non-zero fraction of the surplus is to restrict the domain of valuations. Suppose the valuations of the objects are different but correlated: say, the first object is more desirable than the second, and so on. Furthermore, the correlation is such that, for each agent, we can derive the valuations of all the objects from a single real number. Now the problem is simpler. Though it looks like Guo's problem, it is more general than their paper. However, with the private information being single-dimensional, I could design an optimal mechanism in this setting using an approach like Guo's.
If you want to know more about mechanism design, a vast literature is available on the internet. (However, being rational/selfish,) I would definitely recommend our tutorials on mechanism design:
Foundations of mechanism design: A tutorial Part 1-Key concepts and classical results
( http://www.springerlink.com/index/N262X20600110437.pdf or http://www.ias.ac.in/sadhana/Pdf2008Apr/83.pdf )
Foundations of mechanism design: A tutorial Part 2-Advanced concepts and results
( http://www.springerlink.com/index/F785R8R08N075254.pdf or http://www.ias.ac.in/sadhana/Pdf2008Apr/131.pdf)
For redistribution mechanisms for heterogeneous objects (Moulin's and Guo's papers are in the references):
Redistribution of VCG Payments in Assignment of Heterogeneous Objects
(http://www.springerlink.com/content/u5vhnh4386618430/?p=bb5cc5f1a3ba40748f9e7f93d12312c6&pi=0)
For the impossibility of linear rebate functions, and linear rebate functions in correlated domains:
Redistribution Mechanisms for Assignment of Heterogeneous Objects
(http://ceur-ws.org/Vol-494/famaspaper1.pdf)
Chalo then,
Bye for now, Royal Sujit has to sign off...
---
Sujit Gujar
Research Student,
Indian Institute of Science, Bangalore.