Adding Implementation to Interfaces with Extension Methods

by jmorris 21. January 2011 22:58

One of the quirks of extension methods, added in the .NET 3.5 release, is that you can add them to interfaces, seemingly adding implementation to interfaces. By definition, in the CLR (and most other languages or platforms) interfaces contain only the signatures of methods, events, properties or indexers. By implementing an interface in a class, the body of each method, event, property or indexer is supplied…the implementation is added.

For example:

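The original code was a screenshot; here is a minimal sketch of the kind of extensions the post describes, assuming the Apache.NMS API where IMessage exposes its IPrimitiveMap through a Properties property (the method names are illustrative):

using Apache.NMS;

public static class MessageExtensions
{
    // Reads a string value out of the message's IPrimitiveMap.
    public static string GetStringProperty(this IMessage message, string key)
    {
        return message.Properties.GetString(key);
    }

    // Writes a string value into the message's IPrimitiveMap.
    public static void SetStringProperty(this IMessage message, string key, string value)
    {
        message.Properties.SetString(key, value);
    }
}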

Here I am extending the IMessage interface with a couple of methods that simplify access to the value pairs contained within an IPrimitiveMap (a hash map implementation). A couple of unit tests illustrate the usage:

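The tests were also screenshots; a hypothetical test in the same spirit might look like this (ActiveMQTextMessage is the concrete Apache.NMS.ActiveMQ message type; the NUnit-style [Test] attribute and the values are assumptions):

[Test]
public void Can_Get_And_Set_Properties_Via_Extensions()
{
    IMessage message = new ActiveMQTextMessage();

    message.SetStringProperty("title", "Adding Implementation to Interfaces");

    Assert.AreEqual("Adding Implementation to Interfaces", message.GetStringProperty("title"));
}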

Truthfully, you are adding nothing to the interface! No implementation is actually added…a static class is created to hold the extension methods, and some compile time kung fu lets you invoke them off of the interface definition. Extension methods are not in the domain of the CLR; they belong to the compiler, via the System.Runtime.CompilerServices.ExtensionAttribute.
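
For instance, given the GetStringProperty extension sketched above, the two call forms below compile to exactly the same thing:

string title = message.GetStringProperty("title");                   // extension method syntax
string same = MessageExtensions.GetStringProperty(message, "title"); // what the compiler actually emits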

I would imagine this is old news for most seasoned .NET developers, but I just kind of stumbled upon it myself today and thought it was interesting with respect to interfaces and implementation :)


C#

Refactoring 101: Method Groups

by jmorris 4. November 2010 14:10

C# allows method group conversions, which simplify the invocation of delegates within your code. This feature was added in C# 2.0, and when combined with the LINQ extensions provided with .NET 3.5, it can drastically shorten and simplify code.

“Similar to the implicit anonymous method conversions described in §13.5, an implicit conversion exists from a method group (§14.1) to a compatible delegate type. If D is a delegate type, and E is an expression that is classified as a method group, then D is compatible with E if and only if E contains at least one method that is applicable in its normal form (§14.4.2.1) to any argument list (§14.4.1) having types and modifiers matching the parameter types and modifiers of D.”

Basically, what the above means is that the compiler is “smart” enough to infer the correct overload to call, given that there is an adequate candidate method available. For example, given the following two methods:

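The original methods were shown in a screenshot; here is a stand-in pair in the same spirit (all names are made up): the first loops over a collection and calls the second for each item.

public void NotifyAll(List<string> emailAddresses)
{
    for (int i = 0; i < emailAddresses.Count; i++)
    {
        SendNotification(emailAddresses[i]);
    }
}

public void SendNotification(string emailAddress)
{
    Console.WriteLine("Notifying {0}", emailAddress);
}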

First we can refactor the for loop using lambda expression and the Linq extensions:

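Using the stand-in methods above, the loop collapses to a single line with a lambda and List&lt;T&gt;.ForEach (standing in for whichever extension the original screenshot used):

public void NotifyAll(List<string> emailAddresses)
{
    emailAddresses.ForEach(address => SendNotification(address));
}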

Then simplify the lambda expression even further by substituting for the implicit method group conversion:

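Because SendNotification's signature matches the Action&lt;string&gt; parameter that ForEach expects, the lambda can be replaced by the method group itself:

public void NotifyAll(List<string> emailAddresses)
{
    emailAddresses.ForEach(SendNotification);
}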

Note that the green squiggly lines are hints made by ReSharper that the line of code can be refactored. If you're not aware of ReSharper, it's a Visual Studio add-on that turns VS from a Pinto into a Ferrari! If you don't believe me, try the free trial here. Ok, enough Kool-Aid and free marketing for ReSharper…

So, you're probably thinking one of three things about now (assuming you made it this far):

  1. “Big fricken deal, he saved five lines of code”
  2. “Eh, old news. Moving on.”
  3. “Wow, that’s fricken awesome dude!”

Personally, I tend towards #3. I am a huge fan (obviously) of method group conversions because they reduce complexity. They simply make the code easier to read and digest. Code that is easier to read and digest is more easily maintained. Code that is easier to maintain tends to be of higher quality and less error prone.



Refactoring | Resharper | C#

ActiveMQ via C# using Apache.NMS Part 2 - Queues

by jmorris 29. July 2010 13:35

ActiveMQ via C# Part 2
In part 1 of this series, I discussed the basics of JMS and messaging schematics: publish/subscribe and sender/receiver. I introduced Apache.NMS and the ActiveMQ client for NMS, Apache.NMS.ActiveMQ. I created client classes for publishing messages to ActiveMQ via topics and subscribing to a topic to receive messages. Finally, I tested the classes by writing a unit test that contained a publisher that sent messages to a topic and a subscriber that wrote the output to the console. In this example, I will expand on the C# client by writing a simple, reusable, configuration-based API for sending and receiving messages using ActiveMQ and Apache.NMS.

Scenario
In situations where you have multiple projects in need of a common messaging infrastructure, relatively low messaging domain knowledge across teams, and an integration requirement between .NET and other Java based systems, a simple wrapper over a more complex JMS/NMS API can speed up client development. Additionally, the JMS/NMS programming model lends itself to a configuration-based API that uses factory methods to create configured objects for sending and receiving messages.

The benefit of going with this model is that users of the API can get up and running quickly and need very little understanding of the nuances of JMS/NMS. Also, APIs like this make it easier to manage deployments amongst the different software environments in most organizations: dev, staging, prod, etc.

More Messaging Schematics
JMS provides two major schemes for sending and receiving messages: point-to-point (PTP) and publish/subscribe (Pub/Sub). PTP messaging is always queue based, while the Pub/Sub model uses topics.

In JMS speak, topics and queues are simply message destinations or channels: producers send messages to them, and consumers listening on them are notified when messages are available. For topics, each consumer that subscribes to a topic will receive a copy of a message if the consumer is an active subscriber at the time the broker receives the message. If a producer sends a message to a destination and there are ten consumers actively subscribing to that destination, all ten consumers will receive a copy of the message. Pub/Sub is useful when you want every consumer or client to receive a message sent to a destination by a producer:


On the other hand, queues are like load balancers: if a producer sends a message to the destination, the message is kept by the broker until a consumer is available, and then the message is delivered. Note that if there is one producer and ten consumers of the queue, exactly one consumer will receive a copy of the message. The messages are load-balanced: if another message is sent, the next consumer receives it, and then the next, and so on as messages are sent to the destination. A scenario where this is useful is where you have, say, a pool of processes and you want each message sent to the pool to be received by a single process, and each subsequent message to be processed by exactly one other, different process within the pool.



In part 1 I created a consumer and producer of messages using Pub/Sub schematics; in this example I will augment this API with classes for sending messages using PTP messaging schematics. These classes will allow clients to send and receive messages via queues, with consumers receiving at most one copy of each message sent by a producer. Additionally, I’ll provide a simplified API for using ActiveMQ across multiple projects.

Utilizing Queues – PTP Messaging
As I stated previously, queue based messaging is always PTP; a producer sends a message to a queue and exactly one consumer will receive a copy of the message. For our API we will define a producer and a consumer of messages. First the producer:
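
The producer class was shown as a screenshot; below is a minimal sketch of what it likely looked like, built on the Apache.NMS API introduced in part 1 (the class and member names are illustrative):

using Apache.NMS;

public class QueueSender
{
    private readonly ISession _session;
    private readonly IMessageProducer _producer;

    public QueueSender(ISession session, string destination)
    {
        _session = session;
        _producer = session.CreateProducer(session.GetQueue(destination));
    }

    public void SendMessage(string message)
    {
        _producer.Send(_session.CreateTextMessage(message));
    }
}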


This simple class takes as constructor arguments a session and a string identifying the destination or queue that we will be sending messages to. Usage is even simpler: just call the SendMessage method and pass in a string to send. Note that this could be augmented to send more complex message types, but for simplicity I have omitted anything other than string messages.

The consumer class is likewise rather simple and has a constructor that takes the same two arguments, a session and a string identifier of the queue that messages will be received from:
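
Again a sketch under the same assumptions: the receiver exposes a delegate for clients to register and wires it to the consumer's Listener event when Start is called.

using Apache.NMS;

public class QueueReceiver
{
    private readonly ISession _session;
    private readonly string _destination;
    private IMessageConsumer _consumer;

    public QueueReceiver(ISession session, string destination)
    {
        _session = session;
        _destination = destination;
    }

    // Clients register a handler here to be notified of incoming messages.
    public event MessageListener OnMessageReceived;

    public void Start(string clientId)
    {
        // clientId identifies the consumer in the original post; it is unused in this sketch.
        _consumer = _session.CreateConsumer(_session.GetQueue(_destination));
        _consumer.Listener += message =>
        {
            if (OnMessageReceived != null)
            {
                OnMessageReceived(message);
            }
        };
    }
}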

The QueueReceiver requires that you register a delegate that is fired when a message is received by the consumer. The client starts listening to the queue by calling the Start method and passing in a string that uniquely identifies the client. Like subscribers (and all other message consumers), the QueueReceiver should be a long-lived object, with a lifetime of the entire application.

The entire API looks like the following:


Usage is very simple: provide a long-lived consumer object with a queue as its destination, then create shorter-lived producer objects that send messages to the broker. The following unit test shows an example of the round-robin way that messages are received by the registered consumers of a queue:
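
The test itself was a screenshot; a rough sketch of its likely shape follows (the broker URI, queue name and the NUnit-style [Test] attribute are assumptions):

[Test]
public void Messages_Are_Distributed_Round_Robin_Between_Receivers()
{
    IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
    using (IConnection connection = factory.CreateConnection())
    using (ISession session = connection.CreateSession())
    {
        connection.Start();

        var receiver1 = new QueueReceiver(session, "test.queue");
        var receiver2 = new QueueReceiver(session, "test.queue");
        receiver1.OnMessageReceived += m => Console.WriteLine("receiver1: " + ((ITextMessage)m).Text);
        receiver2.OnMessageReceived += m => Console.WriteLine("receiver2: " + ((ITextMessage)m).Text);
        receiver1.Start("client-1");
        receiver2.Start("client-2");

        var sender = new QueueSender(session, "test.queue");
        for (int i = 0; i < 10; i++)
        {
            sender.SendMessage("message " + i);
        }

        Thread.Sleep(1000); // crude wait for asynchronous delivery before the session is disposed
    }
}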


Blog

Generating Data Transfer Objects with Separate Files from a Schema Using T4

by jmorris 3. December 2009 04:21

T4, or Text Template Transformation Toolkit, is a template-based solution for generating code that is built into VS2008 (it's also available as an add-in for VS2005). Alas, it has minimal support in VS2008 in that there is no Visual Studio item template for adding a T4 template - you cannot just right click Add > New Item and add a T4 file to your project. However, you can create a new file and change the extension to ".tt" and VS will know that you are adding a T4 file to the project (after a security prompt asking you if you really want to add a T4 file) and create the appropriate T4 template and .cs code-behind file that accompanies each .tt file. For a detailed explanation of how to do this, please see Hanselman's post here.

T4 templates are pretty cool once you get the hang of some of the nuances of the editor, which is somewhat lacking in features (reminds me of using Notepad to write Assembly code in college). There are in fact at least two VS add-ons that add some degree of intellisense and code/syntax highlighting: Clarius Visual T4 and Tangible T4 Editor for VS. They both offer free developer editions with limited functionality if you just want to get a feel for what they can do without forking out the cash.

Out of the box, T4 templates have one glaring weakness: only one template (.tt) file can be associated with one output file (.cs, et al). This is not really ideal in that we typically associate one source code artifact (class, sproc, etc.) with its own file. This makes it easier to grok, manage and read a project's source, and also makes versioning easier using version control software such as Git or Subversion. It's much easier to track changes to a single class in a file than multiple classes in a file. With a little work and a little help, however, it is possible to generate multiple source files from template files.

So, T4 aside, what are Data Transfer Objects (DTOs) and why do we need them? DTOs are exactly what they propose to be: objects containing data, typically corresponding to a single record in a database table. In my opinion, they are anemic in that they contain NO behavior whatsoever. They are the data...contrast this with domain objects, which are data and behavior. A very typical scenario involving DTOs is one where data must be moved from one part of the system to another, where it is consumed and used, likely by domain object(s). For instance, we may use an ORM such as NHibernate or Entity Framework to access the database, bring back a set of data and then map it to a DTO.

In many cases your DTOs are mapped directly to your database schema in that there is a one-to-one mapping between table columns and object fields or properties. In this situation, manually creating an object per entity becomes tedious and scales poorly in terms of developer productivity. This is where a code generation solution, such as one created with T4, really shines.

For example, given the following schema, generate a DTO for each entity:

The first step is getting enough information about the tables from the database metatables so that you can generate objects for each table. Assuming you are mapping to pure DTOs, you can ignore any of the relationships in the form of 'Has A' in your objects. It's not that these relationships do not exist; they do, just not explicitly. Instead, you maintain the relationships through properties that represent the foreign keys between the objects. An obvious benefit of this sort of convention is that you immediately resolve the potential n+1 problems inherent in ORMs and lazy loading, at the expense of some other ORM features, such as object tracking. IMO ignoring these relationships is a personal preference as well as an architectural concern; for this example I am ignoring these relations.

The following sproc is an example of how to get this data from a database, in this case I am using MS SQL Server 2008:



This stored procedure returns a record describing the table and each column of the table, or at least the relevant parts: name, data type, and whether or not the column is a primary key. This sproc is called from a T4 template to load a description of each table and its columns into memory. Here is the relevant code:
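
The template code was a screenshot; the sketch below shows the kind of C# that would sit in the template's class feature block (the sproc name, the column names in the result set, and the TableDefinition/ColumnDefinition helpers are all assumptions):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class ColumnDefinition
{
    public string Name { get; set; }
    public string DataType { get; set; }
    public bool IsPrimaryKey { get; set; }
}

public class TableDefinition
{
    public TableDefinition() { Columns = new List<ColumnDefinition>(); }
    public string Name { get; set; }
    public List<ColumnDefinition> Columns { get; set; }
}

public static class SchemaReader
{
    // Calls the stored procedure and builds an in-memory description of each table.
    public static List<TableDefinition> GetTypes(string connectionString)
    {
        var tables = new Dictionary<string, TableDefinition>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("GetTableDefinitions", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    string tableName = reader.GetString(reader.GetOrdinal("TableName"));
                    TableDefinition table;
                    if (!tables.TryGetValue(tableName, out table))
                    {
                        table = new TableDefinition { Name = tableName };
                        tables.Add(tableName, table);
                    }
                    table.Columns.Add(new ColumnDefinition
                    {
                        Name = reader.GetString(reader.GetOrdinal("ColumnName")),
                        DataType = reader.GetString(reader.GetOrdinal("DataType")),
                        IsPrimaryKey = reader.GetBoolean(reader.GetOrdinal("IsPrimaryKey"))
                    });
                }
            }
        }
        return new List<TableDefinition>(tables.Values);
    }
}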



In order to generate separate files for each artifact generated, we will be using three separate T4 templates: GenerateEntities.tt, Entity.tt, and MultiOutput.tt. GenerateEntities.tt contains the code above as well as another code block in which it loops through the results returned from GetTypes() and uses the Entity.tt template to generate the artifact. MultiOutput.tt takes the code generated by GenerateEntities.tt and Entity.tt, writes the output file to disk, and adds the file to Visual Studio. Note that MultiOutput.tt comes from one of many excellent posts by Oleg Sych and that an updated version of the file is purported to be available with the T4 Toolbox up on CodePlex.com.




The code above loops through the table definitions and uses remoting to set a property on the Entity.tt template with each value. Finally, the MultiOutput.tt ProcessTemplate(...) method is called, which writes the output of Entity.tt to disk and adds the file to Visual Studio. Entity.tt is pretty straightforward:



When the solution is saved, the T4 templating engine will process the templates and a file will be added to Visual Studio, nested under the GenerateEntities.tt file, that represents a DTO for each table in the database.
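
For example, the generated file for a hypothetical Customer table might contain nothing more than this (column names are illustrative):

public class Customer
{
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}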



References:

  1. http://www.hanselman.com/blog/T4TextTemplateTransformationToolkitCodeGenerationBestKeptVisualStudioSecret.aspx
  2. http://www.codeplex.com/t4toolbox
  3. http://www.olegsych.com/2008/03/how-to-generate-multiple-outputs-from-single-t4-template/


Cleaning up XmlWriter and IXmlSerializable with Extension Methods

by jmorris 4. November 2009 19:46

If you do any work with XML you have probably come across scenarios where you are using an XmlWriter to produce an output stream of XML. Eventually this output stream is either persisted to disk via an XDocument, sent over the wire using a distributed technology such as WCF, Remoting, etc., or possibly transformed with XSL/XSLT. A strong example is custom serialization classes that implement IXmlSerializable. For example:
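
The class was shown as a screenshot; a hypothetical stand-in with the same shape follows (the PersonDto name and its properties are made up):

using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public class PersonDto : IXmlSerializable
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(XmlReader reader)
    {
        // Implementation shown below.
    }

    public void WriteXml(XmlWriter writer)
    {
        // Implementation shown below.
    }
}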

The class above is a simple data transfer class (DTO) that implements IXmlSerializable so that it can be serialized and/or deserialized from an object to an XML stream and vice versa. Note: in most cases you would simply mark the class as [Serializable] and/or provide attributes from the System.Xml namespace to get the same behavior; however, in many cases the default implementation will not fit your particular scenario, hence you would implement IXmlSerializable and provide your own custom serialization.

Here is the 'custom' serialization implementation:
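
A sketch of what that implementation looks like for the hypothetical PersonDto above, written against the raw XmlWriter/XmlReader calls:

public void WriteXml(XmlWriter writer)
{
    writer.WriteStartElement("Id");
    writer.WriteValue(Id);
    writer.WriteEndElement();

    writer.WriteStartElement("FirstName");
    writer.WriteValue(FirstName);
    writer.WriteEndElement();

    writer.WriteStartElement("LastName");
    writer.WriteValue(LastName);
    writer.WriteEndElement();
}

public void ReadXml(XmlReader reader)
{
    reader.ReadStartElement();
    Id = reader.ReadElementContentAsInt("Id", "");
    FirstName = reader.ReadElementContentAsString("FirstName", "");
    LastName = reader.ReadElementContentAsString("LastName", "");
    reader.ReadEndElement();
}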


While the XmlWriter/XmlReader APIs are pretty simple to use, they are also a bit verbose. If you happen to have a fairly large class with many fields, things start to get ugly pretty fast. Typically when I see large classes, I begin to think about refactoring into smaller classes when applicable, but that's not always the case. Since most of the time you simply want to quickly (i.e. with fewer keystrokes) turn the contents and structure of the class into its XML equivalent, you are really looking at reducing the amount of work needed. This is where extension methods really come in handy:
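
A minimal sketch of the kind of extension the post is describing (the WriteElement name is an assumption): one call writes a complete element instead of the start/value/end triplet.

using System.Xml;

public static class XmlWriterExtensions
{
    // Writes <name>value</name> in a single call.
    public static void WriteElement(this XmlWriter writer, string name, object value)
    {
        writer.WriteStartElement(name);
        writer.WriteValue(value);
        writer.WriteEndElement();
    }
}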



The result compared to above is a much cleaner, easier to read class:
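
With that extension in place, the hypothetical WriteXml from earlier shrinks to:

public void WriteXml(XmlWriter writer)
{
    writer.WriteElement("Id", Id);
    writer.WriteElement("FirstName", FirstName);
    writer.WriteElement("LastName", LastName);
}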


While extension methods are not new, they do offer a unique way of handling situations where you would like to simplify a set of operations without reaching for the traditional static xxxUtil class or creating a customized implementation or wrapper class. In this case, XmlWriter is a class open for extension via basic inheritance, unlike a sealed class such as System.String, which is the intended target of extension methods: extending classes closed to inheritance (sealed).


Using the Repository Pattern with the Command and DataMapper Patterns

by jmorris 1. September 2009 22:00

This post nearly completes the API defined in my earlier posts on the DataMapper pattern and the Command Pattern, which showed a solution for executing queries against a remote service and mapping the results to POCO objects. The Command pattern implementation gave us a means of creating client requests with various combinations of parameters and allowed the query to be executed against the remote service. The DataMapper allowed us to explicitly map the results of the query, a response in the form of an XML stream (SOAP body), to objects on the client via CLR attributes. To succinctly complete the union between the data store, the mapping layer and the domain layer, we use the Repository pattern: “A Repository mediates between the domain and data mapping layers…Client objects construct query specifications declaratively and submit them to the Repository for satisfaction.”  - Fowler [http://martinfowler.com/eaaCatalog/repository.html]

From an [earlier post], here is the whole, anemic API (with the exception of the WCF client used to communicate with the service):

From the model above, it’s pretty simple: commands are built by the client and executed against the service (the data store in this case); results are then mapped to POCO objects via .NET attributes and a little reflection. The repository becomes an intermediary between the domain model, the mapping and the data store; it manages creation of the command object (and some parameter aspects) and then facilitates the mapping of the result set, hiding much of the heavy lifting (so to speak) from the client. The client then gets a nice, clean typed object to use, complete with intellisense and compiler support; much better than DataSets or XPath and XML.

Depending upon the implementation, the Repository lends itself to a rather simple structure; most of the moving parts involve the objects that it uses internally. Here is the code for the abstract Repository&lt;T&gt; class:

public abstract class Repository<T> : IRepository<T> where T : IEntity
    {
        protected Repository(IDataMapper<T> dataMapper, IContentCommand command)
        {
            DataMapper = dataMapper;
            Command = command;
        }

        #region IRepository<T> Members
 
        public virtual List<T> GetAll(IQueryRequest request)
        {
            IQueryResponse response = Command.Execute(request);
            using (var reader = new XmlTextReader(new StringReader(response.Xml)))
            {
                return DataMapper.MapAll(reader);
            }
        }
 
        public virtual T Get(IQueryRequest request)
        {
            request.Start = 0; request.Limit = 1;
            IQueryResponse response = Command.Execute(request);
            using (var reader = new XmlTextReader(new StringReader(response.Xml)))
            {
                return DataMapper.Map(reader);
            }
        }
 
        public T Get(IQueryRequest request, out int count)
        {
            request.Start = 0; request.Limit = 1;

            IQueryResponse response = Command.Execute(request);
            count = response.ResultCount;
 
            using (var reader = new XmlTextReader(new StringReader(response.Xml)))
            {
                return DataMapper.Map(reader);
            }
        }
 
        public List<T> GetAll(IQueryRequest request, out int count)
        {
            IQueryResponse response = Command.Execute(request);
            count = response.ResultCount;
 
            using (var reader = new XmlTextReader(new StringReader(response.Xml)))
            {
                return DataMapper.MapAll(reader);
            }
        }
 
        public Dictionary<string, List<IEntity>> GetAll(IBatchQueryRequest queries)
        {
            var results = new Dictionary<string, List<IEntity>>();
            IBatchQueryResponse response = Command.Execute(queries);
            for (int i = 0; i < response.Responses.Count(); i++)
            {
                IQueryRequest query = queries.Requests[i];
                using (var reader = new XmlTextReader(new StringReader(response.Responses[i].Xml)))
                {
                    var mapped = DataMapper.MapAll(reader) as List<IEntity>;
                    results.Add(query.Name, mapped);
                }
 
            }
            return results;
        }
 
        public IDataMapper<T> DataMapper { get; set; }
 
        public IContentCommand Command { get; set; }
 
        public bool EnableServiceLayerCaching { get; set; }
 
        #endregion
    }

Classes that derive from this are equally minimal in that they just override the constructor with appropriate IDataMapper<T> and IContentCommand implementations and they are ready to go. Note that the design supports Dependency Injection (DI), which facilitates unit testing by providing hooks through which faked or mocked objects can be passed in, bypassing a direct dependency upon the service layer. My original implementation used a pure file-based XML version independent of the remote service.
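
As a rough illustration, a derived repository might be no more than this (ArticleRepository and Article are hypothetical names):

public class ArticleRepository : Repository<Article>
{
    public ArticleRepository(IDataMapper<Article> dataMapper, IContentCommand command)
        : base(dataMapper, command)
    {
    }
}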


Implementing the Command Pattern

by jmorris 19. August 2009 22:38

Command Pattern Background

One of the most ubiquitous software patterns in existence is the Command Pattern:  “Encapsulate a request as an object, thereby allowing for the parameterization of clients with different requests, queue or log requests, and support undoable operations” – GoF. It is generally used in situations where all of the information necessary for a future call can be ‘built up’ sequentially and finally executed at a point in time in the future – i.e. each action is stateful.

A common scenario is an object used to build up a database request taking in several different parameters such as the database name and location, the stored procedure name, any data that needs to be passed to the stored procedure and possibly a transaction object. Once the command has been constructed and all required parts are defined, it is executed, which delegates an action to call the stored procedure on the database and return a result set or perform some operation on an object, with the provided parameters. In the C# world, we are talking about System.Data.DbCommand and its concrete classes such as System.Data.SqlClient.SqlCommand or any other database specific implementation of DbCommand (Oracle, MySql, etc.).
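
For reference, the familiar ADO.NET flavor of the pattern looks roughly like this (the sproc name, parameter and connectionString variable are illustrative, and the System.Data.SqlClient namespace is assumed): the command is built up piece by piece and only does its work when ExecuteReader is finally called.

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("GetArticlesByAuthor", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@AuthorId", 42);

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0));
        }
    }
}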

Here is the basic UML diagram of the Command Pattern:

The Command Pattern typically utilizes the following actors:
  1. Client – creates the command object, providing the relevant information (parameters) needed to fulfill its obligations
  2. Invoker – initiates the action on the command so that its obligations are fulfilled, making the decision of which of the command’s actions should be called
  3. Receiver – performs the command object’s obligations, whatever they might be – i.e. makes database call or some other action

The Command Pattern Implemented

The scenario is this: a service exists which takes a series of inputs (a query) and returns the results of those inputs as an XML stream of attributes and values. Caching at the service level and several operations are also supported: filtering, sorting, paging, etc.

Each request varies by parameters and is encapsulated as a query object defined by the client. This query object is simply a data structure that contains the fields optionally accepted by the service. The query object is executed by a command object created by the client, which delegates control to the receiver it encapsulates by invoking one of its action methods.

The following diagram illustrates the implementation:

 

 

The Command implementation is very simple. It contains an ICommandReceiver reference and an IQueryRequest reference, which stores the current query being executed. The command delegates control to the receiver, which performs its work and returns an IQueryResponse object containing the results of the query.
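
The class itself was shown as an image; a rough sketch consistent with that description follows (only the single-query path is shown, and the receiver's Execute method is an assumption):

public class ContentCommand : IContentCommand
{
    private readonly ICommandReceiver _receiver;

    public ContentCommand(ICommandReceiver receiver)
    {
        _receiver = receiver;
    }

    // The current query being executed.
    public IQueryRequest Request { get; private set; }

    public IQueryResponse Execute(IQueryRequest request)
    {
        Request = request;
        return _receiver.Execute(request); // the receiver does the actual work
    }
}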

 

The ContentReceiver class implements the ICommandReceiver interface and simply makes the call to the remote service via a WCF client. This happens to be a very simplistic implementation in that much of the heavy lifting (i.e. caching, query generation, etc.) occurs in the service. In this case, we are simply translating a client query into a SOAP request to access the service’s resources.


The client uses an IQueryRequest object to build up a set of parameters which are consumed by the service endpoint via the command object. The results are returned to the client with an IQueryResponse object. How the results are used is dependent upon the client.




Finally, a simple unit test illustrating the usage:
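
The test was a screenshot; a hypothetical sketch of its shape (the concrete QueryRequest type and the values used are assumptions):

[Test]
public void Execute_Returns_A_Response_With_Results()
{
    IContentCommand command = new ContentCommand(new ContentReceiver());

    var request = new QueryRequest { Start = 0, Limit = 10 };
    IQueryResponse response = command.Execute(request);

    Assert.IsNotNull(response);
    Assert.IsNotNull(response.Xml);
}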


DataMapper Pattern Implementation with XML and Attributes

by jmorris 21. July 2009 22:17

The DataMapper Pattern is a well documented and defined pattern for abstracting away an object’s storage from its in-memory representation. For the most part the pattern explicitly defines its domain to be relational databases and the objects that map to their schema; however, the pattern is adaptable to non-relational data stores as well. In fact, it is applicable to any situation where you need to map records (related units of data) to in-memory objects, regardless of structure or underlying store.

Imagine, for instance, a service that is a façade for a complex matrix of unstructured key value pairs and a protocol for clients to query the service for tuples. For example:

  • For the make ‘honda’, give me all ‘years’ 
  • For the article ‘2311’, give me ‘title’, ‘description’, ‘author’, ‘publication_date’

This is not too much different from any relational database model, with the exception that there is no direct joining occurring to return back columns that occur in different tables and relate them to the input set. It’s basically just a giant hash-map of name/value pairs with an undefined structure…i.e. there is no way to directly map it to a domain model; what you supply as inputs defines the structure of your output.

That’s where the DataMapper Pattern comes into play. According to Fowler, the DataMapper is “a layer of Mappers that moves data between objects and a database while keeping them independent of each other and the mapper itself”. The core motivation behind the pattern is separation of concerns: an abstraction between the data, the store, and the entity objects themselves. By using a DataMapper we can associate the name/value pair scenario described above with our domain objects, providing structure to our model. This is important because we go from an untyped hash-map model to tangible (POCO) objects that are typed, thus allowing us the benefits of developer friendly features of our IDE, such as ‘intellisense’ and compile-time type safety.

Implemented within an API, you get something that looks like the following:

Think of it as a minimalistic ORM. In this case such concerns as caching and transactions are the responsibility of the underlying data-store service. As a consumer we are only concerned with making a request for data, getting a result set, and mapping it to an entity or domain object. In the diagram above, there are actually three different patterns working to create the whole: the command pattern, the repository pattern, and the data mapper pattern. All three comprise the entire API.

A Simple DataMapper Implementation:
This implementation of a DataMapper really does just three things: a) it reads through an XML stream, b) it maps the values of specific XML attributes to associated properties, and c) it packages the results into either a collection or a single object of the Type defined by the generic argument:
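
The mapper itself was a screenshot; below is a rough sketch of the approach described in this post (the FieldName property on FieldMappingAttribute, the new() constraint and the method bodies are assumptions):

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Xml;

public class XmlDataMapper<T> : IDataMapper<T> where T : IEntity, new()
{
    public T Map(XmlReader reader)
    {
        reader.MoveToContent();
        return MapOne(Read(reader));
    }

    public List<T> MapAll(XmlReader reader)
    {
        var results = new List<T>();
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.HasAttributes)
            {
                results.Add(MapOne(Read(reader)));
            }
        }
        return results;
    }

    // Extracts the attribute names and values of the current element into a dictionary.
    private static Dictionary<string, string> Read(XmlReader reader)
    {
        var values = new Dictionary<string, string>();
        while (reader.MoveToNextAttribute())
        {
            values[reader.Name] = reader.Value;
        }
        reader.MoveToElement();
        return values;
    }

    // Pushes the extracted values onto properties decorated with [FieldMapping].
    private static T MapOne(Dictionary<string, string> values)
    {
        var entity = new T();
        foreach (PropertyInfo property in typeof(T).GetProperties())
        {
            var mapping = (FieldMappingAttribute)Attribute.GetCustomAttribute(
                property, typeof(FieldMappingAttribute));

            string value;
            if (mapping != null && values.TryGetValue(mapping.FieldName, out value))
            {
                property.SetValue(entity, value, null); // string properties only
            }
        }
        return entity;
    }
}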

Map maps a single result to a single object, MapAll maps multiple results to a list of objects, and Read is a private method that extracts all key value pairs (XML attributes and their associated values) into a dictionary.

All Read does is extract the XML attribute names and their associated data and store them in a dictionary. The dictionary is then used by the Map and MapAll methods, along with a System.Attribute class, to map the values directly to the correct properties:

In the code snippet above, after the Read method has extracted the data for mapping, we loop over each of the object’s properties looking for a FieldMappingAttribute on each of the properties we have flagged for mapping. If a match is found, the relevant data is mapped directly to the object’s property and then the object is returned.

 

MapAll does essentially the same as Map, but assumes that the stream contains more than one object definition and then returns the resulting collection.

A couple of things to note here: the code a) needs a little refactoring to improve performance and extensibility, b) assumes a relatively flat XML stream – no nesting of elements, for example – and c) only supports properties of the type System.String. The simplicity is purposeful…when I need it I’ll add it.

One last thing to discuss is the implementation of FieldMappingAttribute and its usage. FieldMappingAttribute is a simple class deriving from System.Attribute that allows us to decorate properties on our entity classes with attributes that define our mapping schema:
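
A minimal sketch of what such an attribute might look like (the FieldName property name is an assumption):

[AttributeUsage(AttributeTargets.Property)]
public class FieldMappingAttribute : Attribute
{
    public FieldMappingAttribute(string fieldName)
    {
        FieldName = fieldName;
    }

    // The XML attribute name that the decorated property maps to.
    public string FieldName { get; private set; }
}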

The usage is pretty simple and should be familiar to most .NET developers:
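
For instance, a hypothetical entity based on the article example earlier might be decorated like this:

public class Article : IEntity
{
    [FieldMapping("title")]
    public string Title { get; set; }

    [FieldMapping("description")]
    public string Description { get; set; }

    [FieldMapping("author")]
    public string Author { get; set; }

    [FieldMapping("publication_date")]
    public string PublicationDate { get; set; }
}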



WCF and Large Message Bodies

by jmorris 22. June 2009 16:39

I have been running into some situations where WCF barfs on calls where the message body is very large:

"The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 3, position 9381."

It took a little poking around to see what exactly was going wrong here, but a couple of posts got me going in the right direction. In fact, this is a really excellent exception message in that it tells me exactly what I need to do to resolve the issue: increase the MaxStringContentLength property on the XmlDictionaryReaderQuotas object that the BasicHttpBinding class is using.

According to MSDN the MaxStringContentLength property:

 "Gets or sets the maximum size for a message that can be received on a channel configured with this binding."

The purpose of this restriction is to mitigate the possibility of DoS attacks on a publicly exposed service caused by arbitrarily large messages. Since I am using TransferMode.Buffered, the message size is bounded by MaxReceivedMessageSize by default.

This is the end result:
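
The screenshot presumably showed the binding configuration; a rough equivalent done in code, assuming BasicHttpBinding (the limits here are illustrative, not recommendations):

var binding = new BasicHttpBinding();

// Buffered transfers are bounded by MaxReceivedMessageSize by default...
binding.MaxReceivedMessageSize = 2097152; // e.g. 2 MB

// ...and string content within the XML is bounded separately by the reader quota.
binding.ReaderQuotas.MaxStringContentLength = 2097152;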


XSLT Transformation Extension for XDocument

by jmorris 15. June 2009 22:49

I needed to update a utilities class I have been using to support XDocument (not just XmlDocument) when I hit upon creating an extension method instead of the typical static utility class/method approach. Extension methods are a simple and powerful means of adding behavior to existing classes without breaking encapsulation. I was initially skeptical of the idea, but they have turned out to be rather nice and syntactically better than static utility classes.

The XDocument class is a key part of the LINQ to XML API released with .NET Framework 3.5. Essentially it's a next-generation replacement for the XmlDocument class with added functionality for easily modifying in-memory XML documents. Overall I prefer XDocument over XmlDocument for various reasons, but learning a new API can be a bit frustrating; it takes time to build a knowledge base of all of the 'gotchas' and 'hacks' ;)

Anyways, here is the final result:
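
The code was a screenshot; a sketch of what the extension likely looks like (the Transform method name is an assumption): apply an XSLT stylesheet to an XDocument and return the result as a new XDocument.

using System.Xml;
using System.Xml.Linq;
using System.Xml.Xsl;

public static class XDocumentExtensions
{
    public static XDocument Transform(this XDocument document, string xsltPath)
    {
        var transform = new XslCompiledTransform();
        transform.Load(xsltPath);

        var output = new XDocument();
        using (XmlReader reader = document.CreateReader())
        using (XmlWriter writer = output.CreateWriter())
        {
            transform.Transform(reader, writer);
        }
        return output;
    }
}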



Note that by convention I named the class after the class I was extending and added an 'Extensions' suffix. This makes things a little easier to manage. Here is the usage:
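
A hypothetical usage, based on the Transform sketch above (file names are illustrative):

XDocument source = XDocument.Load("articles.xml");
XDocument result = source.Transform("articles-to-html.xslt");
result.Save("articles.html");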

 

