
Using Amazon S3 with Silverlight and RIA services

I’ve been working on something recently that gave me a chance to try out Amazon S3. The first thing to mention is that I am using the SOAP API rather than REST. It is also worth noting that I used a web reference, rather than a service reference, to link to the S3 services.

The rest of this post assumes you have an Amazon S3 account. You can add a reference to the S3 services by right-clicking on your web project and selecting ‘Add Web Reference’, which opens a dialog where you enter the service URL.

After I added the web reference I changed the endpoint address in my web.config file to match the name of my S3 bucket. The default endpoint will look something like this:

<applicationSettings>
  <EMASolutions.CMS.Service.Properties.Settings>
    <setting name="EMASolutions_CMS_Service_AWS_AmazonS3" serializeAs="String">
      <value>https://s3.amazonaws.com/soap</value>
    </setting>
  </EMASolutions.CMS.Service.Properties.Settings>
</applicationSettings>

I changed it to point at my S3 bucket by editing the value element:

<applicationSettings>
  <EMASolutions.CMS.Service.Properties.Settings>
    <setting name="EMASolutions_CMS_Service_AWS_AmazonS3" serializeAs="String">
      <value>https://hiberlog-storage.s3.amazonaws.com/soap</value>
    </setting>
  </EMASolutions.CMS.Service.Properties.Settings>
</applicationSettings>

The common code required to generate a signature to pass to the S3 services was pretty small and not really a maintenance headache, but I didn’t want to duplicate it in both the Silverlight app and the server-side web app; I wanted to keep it common between the two. Below is the helper class I used. It is only concerned with generating timestamps in the correct format and an S3 signature.

public enum Operations
{
    ListAllMyBuckets = 0,
    ListBucket = 1,
    GetObject = 2,
    GetObjectInline = 3,
    PutObject = 4,
    PutObjectInline = 5,
}

public class Helper
{
    private const String AwsTsFormat = "yyyy-MM-ddTHH:mm:ss.fffZ";
    private const String AwsAction = "AmazonS3";

    public static DateTime TimeStamp
    {
        get
        {
            var currentDateTime = DateTime.Now;
            return new DateTime(
                currentDateTime.Year, currentDateTime.Month, currentDateTime.Day,
                currentDateTime.Hour, currentDateTime.Minute, currentDateTime.Second,
                currentDateTime.Millisecond, DateTimeKind.Local);
        }
    }

    public static String GetSignature(String secret, Operations operation, DateTime timestamp)
    {
        var enc = new UTF8Encoding();
        var hash = new HMACSHA1(enc.GetBytes(secret));
        var sig = AwsAction + operation + timestamp.ToUniversalTime().ToString(AwsTsFormat);
        return Convert.ToBase64String(hash.ComputeHash(enc.GetBytes(sig)));
    }
}
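To make the signing scheme concrete, here is a minimal, self-contained sketch of what the helper is doing under the hood: the string-to-sign for the S3 SOAP API is the literal "AmazonS3", followed by the operation name, followed by the ISO 8601 timestamp, HMAC-SHA1 signed with your secret key. The secret key and operation here are placeholders, not real values.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SignatureSketch
{
    static void Main()
    {
        // Capture the timestamp once so the value we sign is the value we send.
        var timestamp = DateTime.UtcNow;

        // String-to-sign: "AmazonS3" + operation name + ISO 8601 timestamp.
        var stringToSign = "AmazonS3" + "ListAllMyBuckets" +
            timestamp.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

        using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes("placeholder-secret")))
        {
            // HMAC-SHA1 produces 20 bytes, which Base64-encodes to a 28-character signature.
            var signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            Console.WriteLine(signature);
        }
    }
}
```

One thing to watch: because the signature covers the timestamp, the timestamp you sign and the timestamp you pass to the service call must be the exact same value, or S3 will reject the request.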

I couldn’t see a nice way to avoid duplicating this code, so in the end I settled on exposing the S3 methods through a RIA service and keeping the common code on the server.

For example, the ListAllMyBuckets method on S3 returns a ListAllMyBucketsResult, which has a Buckets property of type ListAllMyBucketsEntry[]. In order to expose this type to the client we need to define a key on the ListAllMyBucketsEntry class. Luckily the class generated from the web reference is a partial class, which allows us to add some metadata, as follows:

[MetadataTypeAttribute(typeof(ListAllMyBucketsEntryMetadata))]
public partial class ListAllMyBucketsEntry
{
    public Guid Id
    {
        get { return Guid.NewGuid(); }
        set
        {
            throw new InvalidOperationException(
                "The Id field on ListAllMyBucketsEntry is not meant to be set");
        }
    }

    internal sealed class ListAllMyBucketsEntryMetadata
    {
        // Metadata classes are not meant to be instantiated.
        private ListAllMyBucketsEntryMetadata() { }

        [Key]
        [Display(AutoGenerateField = false)]
        public Guid Id;
    }
}

This metadata defines a new Id property, marked as the key, which allows ListAllMyBucketsEntry to be projected to the client.

We can now call the RIA service as we normally would; it in turn calls the S3 service to get a list of ListAllMyBucketsEntry objects, which we can bind to as we wish.

In my example the RIA service was getting rather chunky, so I refactored the common S3 plumbing out into another class.

public class Wrapper : IWrapper
{
    internal static IRepository<IApplicationConfiguration> AppSettings =
        With.IoC.GetService<IRepository<IApplicationConfiguration>>();
    internal static String Salt;
    internal static String MainApiKey;
    internal static String SecretApiKey;
    internal static String RootBucket;

    public Wrapper()
    {
        Salt = AppSettings
            .Matching("Key", "common.encryptionkey")
            .First().Value;
        MainApiKey = AppSettings
            .Matching("Key", "common.amazonaws.mainkey")
            .First().Value.Decrypt(Salt);
        SecretApiKey = AppSettings
            .Matching("Key", "common.amazonaws.secretkey")
            .First().Value.Decrypt(Salt);
        RootBucket = AppSettings
            .Matching("Key", "common.amazonaws.rootbucket")
            .First().Value.Decrypt(Salt);
    }

    public ListAllMyBucketsEntry[] ListAllMyBuckets()
    {
        try
        {
            var s3 = new AmazonS3();
            // Capture the timestamp once so the signed value matches the value sent.
            var timestamp = Helper.TimeStamp;
            return s3.ListAllMyBuckets(
                MainApiKey,
                timestamp,
                true,
                Helper.GetSignature(SecretApiKey, Operations.ListAllMyBuckets, timestamp)
            ).Buckets;
        }
        catch (Exception e)
        {
            With.Logger.Get("application");
            With.Logger.Log(e.Message, e, LogInformationType.Error);
            throw;
        }
    }

    public IList<ListEntry> ListBucket(String bucket)
    {
        try
        {
            var s3 = new AmazonS3();
            var timestamp = Helper.TimeStamp;
            return s3.ListBucket(
                bucket, null, null, 100, true, "/",
                MainApiKey,
                timestamp,
                true,
                Helper.GetSignature(SecretApiKey, Operations.ListBucket, timestamp),
                null
            ).Contents.ToList();
        }
        catch (Exception e)
        {
            With.Logger.Get("application");
            With.Logger.Log(e.Message, e, LogInformationType.Error);
            throw;
        }
    }

    public List<PutObjectResult> PutObject(String key, Byte[] data)
    {
        try
        {
            var s3 = new AmazonS3();
            var results = new List<PutObjectResult>();
            var metadataEntries = new MetadataEntry[2];
            metadataEntries[0] = new MetadataEntry { Name = "Content-Type", Value = "text/xml" };
            metadataEntries[1] = new MetadataEntry { Name = "ContentLength", Value = data.Length.ToString() };
            var timestamp = Helper.TimeStamp;
            results.Add(s3.PutObjectInline(
                RootBucket, key, metadataEntries, data, data.Length,
                null, StorageClass.STANDARD, true,
                MainApiKey,
                timestamp,
                true,
                Helper.GetSignature(SecretApiKey, Operations.PutObjectInline, timestamp),
                null
            ));
            return results;
        }
        catch (Exception e)
        {
            With.Logger.Get("application");
            With.Logger.Log(e.Message, e, LogInformationType.Error);
            throw;
        }
    }

    public List<GetObjectResult> GetObject(String key)
    {
        try
        {
            var s3 = new AmazonS3();
            var results = new List<GetObjectResult>();
            var timestamp = Helper.TimeStamp;
            results.Add(s3.GetObject(
                RootBucket, key, true, true, true,
                MainApiKey,
                timestamp,
                true,
                Helper.GetSignature(SecretApiKey, Operations.GetObject, timestamp),
                null
            ));
            return results;
        }
        catch (Exception e)
        {
            With.Logger.Get("application");
            With.Logger.Log(e.Message, e, LogInformationType.Error);
            throw;
        }
    }
}

This leaves the RIA service looking like so.

[EnableClientAccess]
public class AmazonS3Service : DomainService
{
    private static readonly IWrapper AmazonS3Wrapper = With.IoC.GetService<IWrapper>();

    public ListAllMyBucketsEntry[] ListAllMyBuckets()
    {
        return AmazonS3Wrapper.ListAllMyBuckets();
    }

    public IList<ListEntry> ListBucket(String bucket)
    {
        return AmazonS3Wrapper.ListBucket(bucket);
    }

    public List<PutObjectResult> PutObject(String key, Byte[] data)
    {
        return AmazonS3Wrapper.PutObject(key, data);
    }

    public List<GetObjectResult> GetObject(String key)
    {
        return AmazonS3Wrapper.GetObject(key);
    }
}

This works fine, but I’m not sure about the hopping: every request has to hit the RIA service and then the S3 service.

So, I figure it would be prudent to implement some kind of caching. If we take an online storage system as an example and assume a user has requested a file, there is nothing to stop us holding onto this file in a cache and releasing it at a later date. This minimises the number of calls made to the S3 services.
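A minimal sketch of the sort of cache I have in mind is below. The class and method names (ObjectCache, TryGet, Put) are illustrative, not part of the RIA service; it simply keeps downloaded objects in memory, keyed on the S3 object key, and releases them once a time-to-live has elapsed.

```csharp
using System;
using System.Collections.Generic;

// A simple time-bounded, in-memory cache for downloaded S3 objects.
// Assumes single-threaded access; a real server-side cache would need locking.
public class ObjectCache
{
    private readonly Dictionary<String, Tuple<DateTime, Byte[]>> _items =
        new Dictionary<String, Tuple<DateTime, Byte[]>>();
    private readonly TimeSpan _ttl;

    public ObjectCache(TimeSpan ttl) { _ttl = ttl; }

    public Boolean TryGet(String key, out Byte[] data)
    {
        data = null;
        Tuple<DateTime, Byte[]> entry;
        if (!_items.TryGetValue(key, out entry)) return false;
        if (DateTime.UtcNow - entry.Item1 > _ttl)
        {
            _items.Remove(key);   // expired, release the cached file
            return false;
        }
        data = entry.Item2;
        return true;
    }

    public void Put(String key, Byte[] data)
    {
        _items[key] = Tuple.Create(DateTime.UtcNow, data);
    }
}
```

The GetObject wrapper method could then consult TryGet first and only fall through to the S3 call (followed by a Put) on a cache miss.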

Anyway, we can call the RIA service from our Silverlight app as follows (keep in mind this is just an example). _context2 is defined as: private readonly AmazonS3Context _context2;

private void StackPanel_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    if (sfd.ShowDialog() == false) return;

    _context2.Load(
        _context2.GetObjectQuery("test0.txt"),
        LoadBehavior.RefreshCurrent,
        delegate
        {
            Dispatcher.BeginInvoke(
                delegate
                {
                    var data = _context2.GetObjectResults.First().Data;
                    using (var fs = sfd.OpenFile())
                    {
                        fs.Write(data, 0, data.Length);
                    }
                });
        },
        null);
}

Another example, for uploading files, is below; again, it’s just an example.

ofd.Filter = "Text Files|*.txt|Xml Files|*.xml|All Files|*.*";
ofd.FilterIndex = 1;
ofd.Multiselect = true;

if (ofd.ShowDialog() == false) return;

foreach (var file in ofd.Files)
{
    var data = new Byte[file.Length];
    using (var stream = file.OpenRead())
    {
        // Read may return fewer bytes than requested, so loop until the buffer is full.
        var offset = 0;
        while (offset < data.Length)
        {
            var read = stream.Read(data, offset, data.Length - offset);
            if (read == 0) break;
            offset += read;
        }
    }

    _context2.Load(
        _context2.PutObjectQuery(file.Name, data),
        LoadBehavior.RefreshCurrent,
        delegate
        {
            Dispatcher.BeginInvoke(delegate { MessageBox.Show("Uploaded"); });
        },
        null);
}
